This chapter provides procedures to create cluster file systems and to create non-global zones on a cluster node.
This section provides procedures to create cluster file systems to support data services.
Perform this procedure for each cluster file system that you want to create. Unlike a local file system, a cluster file system is accessible from any node in the cluster.
Alternatively, you can use a highly available local file system to support a data service. For information about choosing between creating a cluster file system or a highly available local file system to support a particular data service, see the manual for that data service. For general information about creating a highly available local file system, see Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Perform the following tasks:
Ensure that you installed software packages for the Solaris OS, Sun Cluster framework, and other products as described in Installing the Software.
Ensure that you established the new cluster or cluster node as described in Establishing a New Cluster or New Cluster Node.
If you are using a volume manager, ensure that volume-management software is installed and configured. For volume-manager installation procedures, see Configuring Solaris Volume Manager Software or Installing and Configuring VxVM Software.
If you added a new node to a cluster that uses VxVM, you must do one of the following tasks:
Install VxVM on that node.
Modify that node's /etc/name_to_major file to support coexistence with VxVM.
Follow procedures in How to Install VERITAS Volume Manager Software to perform one of these required tasks.
Determine the mount options to use for each cluster file system that you want to create. See Choosing Mount Options for Cluster File Systems.
Become superuser on any node in the cluster.
For Solaris, you must perform this procedure from the global zone if non-global zones are configured in the cluster.
For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.
Create a file system.
Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.
For a UFS file system, use the newfs(1M) command.
phys-schost# newfs raw-disk-device
The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.
Volume Manager            Sample Disk Device Name       Description
Solaris Volume Manager    /dev/md/nfs/rdsk/d1           Raw disk device d1 within the nfs disk set
VERITAS Volume Manager    /dev/vx/rdsk/oradg/vol01      Raw disk device vol01 within the oradg disk group
None                      /dev/global/rdsk/d1s3         Raw disk device d1s3
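For example, to create a UFS file system on the Solaris Volume Manager sample device from the preceding table (the device name is illustrative; substitute the raw disk device for your own disk set or disk group):
phys-schost# newfs /dev/md/nfs/rdsk/d1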
SPARC: For a VERITAS File System (VxFS) file system, follow the procedures that are provided in your VxFS documentation.
On each node in the cluster, create a mount-point directory for the cluster file system.
A mount point is required on each node, even if the cluster file system is not accessed on that node.
For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
phys-schost# mkdir -p /global/device-group/mountpoint/

device-group    Name of the directory that corresponds to the name of the device group that contains the device.
mountpoint      Name of the directory on which to mount the cluster file system.
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
See the vfstab(4) man page for details.
If non-global zones are configured in the cluster, ensure that you mount cluster file systems in the global zone on a path in the global zone's root directory.
In each entry, specify the required mount options for the type of file system that you use. A sample entry is sketched at the end of this step.
Do not use the logging mount option for Solaris Volume Manager transactional volumes. Transactional volumes provide their own logging.
In addition, Solaris Volume Manager transactional-volume logging is removed from the Solaris 10 OS. Solaris UFS logging provides the same capabilities with superior performance, as well as lower system administration requirements and overhead.
To automatically mount the cluster file system, set the mount at boot field to yes.
Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.
Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
Check the boot order dependencies of the file systems.
For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
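As a sketch only, assuming a Solaris Volume Manager device group named oracle and a UFS cluster file system that uses the global and logging mount options, a vfstab entry might look like the following:
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
The fields are, in order, the block device to mount, the raw device to fsck, the mount point, the file-system type, the fsck pass, the mount-at-boot value, and the mount options.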
On any node in the cluster, run the configuration check utility.
phys-schost# sccheck
The configuration check utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster. If no errors occur, nothing is returned.
For more information, see the sccheck(1M) man page.
Mount the cluster file system.
phys-schost# mount /global/device-group/mountpoint/
For UFS, mount the cluster file system from any node in the cluster.
SPARC: For VxFS, mount the cluster file system from the current master of device-group to ensure that the file system mounts successfully.
In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.
To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
On each node of the cluster, verify that the cluster file system is mounted.
You can use either the df command or mount command to list mounted file systems. For more information, see the df(1M) man page or mount(1M) man page.
For the Solaris 10 OS, cluster file systems are accessible from both the global zone and the non-global zone.
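For example, assuming a cluster file system mounted on the hypothetical mount point /global/oracle/d1, you could confirm the mount with the df command:
phys-schost# df -k /global/oracle/d1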
The following example creates a UFS cluster file system on the Solaris Volume Manager volume /dev/md/oracle/rdsk/d1. An entry for the cluster file system is added to the vfstab file on each node. Then the sccheck command is run from one node. After configuration check processing completes successfully, the cluster file system is mounted from one node and verified on all nodes.
phys-schost# newfs /dev/md/oracle/rdsk/d1
…
phys-schost# mkdir -p /global/oracle/d1
phys-schost# vi /etc/vfstab
#device               device                mount              FS    fsck  mount    mount
#to mount             to fsck               point              type  pass  at boot  options
#
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1  ufs   2     yes      global,logging
…
phys-schost# sccheck
phys-schost# mount /global/oracle/d1
phys-schost# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2005
Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.
To create non-global zones on a node, go to How to Create a Non-Global Zone on a Cluster Node.
SPARC: To configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.
Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
This section provides procedures to create a non-global zone on a cluster node.
Perform this procedure for each non-global zone that you create in the cluster.
For complete information about installing a zone, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
You can configure a Solaris 10 non-global zone, simply referred to as a zone, on a cluster node while the node is booted in either cluster mode or noncluster mode.
If you create a zone while the node is booted in noncluster mode, the cluster software discovers the zone when the node joins the cluster.
If you create or remove a zone while the node is in cluster mode, the cluster software dynamically changes its list of zones that can master resource groups.
Perform the following tasks:
Plan your non-global zone configuration. Observe the requirements and restrictions in Guidelines for Non-Global Zones in a Cluster.
Have available the following information:
The total number of non-global zones that you will create.
The public adapter and public IP address that each zone will use.
The zone path for each zone. This path must be a local file system, not a cluster file system or a highly available local file system.
One or more devices that should appear in each zone.
(Optional) The name that you will assign each zone.
If you will assign the zone a private IP address, ensure that the cluster IP address range can support the additional private IP addresses that you will configure. Use the cluster show-netprops command, as shown after this list, to display the current private-network configuration.
If the current IP address range is not sufficient to support the additional private IP addresses that you will configure, follow the procedures in How to Change the Private Network Configuration When Adding Nodes or Private Networks to reconfigure the private IP address range.
For additional information, see Zone Components in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
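For example, the following command, run as superuser from any node, displays the current private-network settings, such as the private network address and netmask; the exact properties that are reported depend on your Sun Cluster release:
phys-schost# cluster show-netprops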
Become superuser on the node on which you are creating the non-global zone.
You must be in the global zone.
For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
Configure, install, and boot the new zone.
You must set the autoboot property to true to support resource-group functionality in the non-global zone, as shown in the sketch that follows these references.
Follow procedures in the following documentation:
Perform procedures in Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Perform procedures in Installing and Booting Zones in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Perform procedures in How to Boot a Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
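As a minimal sketch only, assuming a zone named my-zone and the hypothetical zone path /zone-path, configuring and installing the zone with autoboot set to true might look like the following. Your configuration will typically require additional properties, such as network resources; see the documentation listed above.
phys-schost# zonecfg -z my-zone
zonecfg:my-zone> create
zonecfg:my-zone> set zonepath=/zone-path
zonecfg:my-zone> set autoboot=true
zonecfg:my-zone> commit
zonecfg:my-zone> exit
phys-schost# zoneadm -z my-zone install
You would then boot the zone by following the How to Boot a Zone procedure that is referenced above.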
Verify that the zone is in the ready state.
phys-schost# zoneadm list -v
  ID NAME             STATUS         PATH
   0 global           running        /
   1 my-zone          ready          /zone-path
(Optional) Assign a private IP address and a private hostname to the zone.
The following command chooses and assigns an available IP address from the cluster's private IP address range. The command also assigns the specified private hostname, or host alias, to the zone and maps it to the assigned private IP address.
phys-schost# clnode set -p zprivatehostname=hostalias node:zone

-p                            Specifies a property.
zprivatehostname=hostalias    Specifies the zone private hostname, or host alias.
node                          The name of the node.
zone                          The name of the non-global zone.
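For example, with a hypothetical node phys-schost-1, a zone named my-zone, and the host alias my-zone-priv, the command might be:
phys-schost# clnode set -p zprivatehostname=my-zone-priv phys-schost-1:my-zone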
Perform the initial internal zone configuration.
Follow the procedures in Performing the Initial Internal Zone Configuration in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. Choose either of the following methods:
Log in to the zone
Use an /etc/sysidcfg file
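As a sketch only, if you use the /etc/sysidcfg method, a minimal file placed in the zone's /etc directory (zonepath/root/etc/sysidcfg) might contain entries similar to the following. All values shown are illustrative and must match your environment and name services:
system_locale=C
terminal=xterm
network_interface=NONE {hostname=my-zone}
name_service=NONE
security_policy=NONE
timezone=US/Pacific
root_password=<encrypted-password-string>
nfs4_domain=dynamic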
In the non-global zone, modify the nsswitch.conf file.
You must make these changes to enable the zone to resolve searches for cluster-specific hostnames and IP addresses.
Log in to the zone.
phys-schost# zlogin zonename
Open the /etc/nsswitch.conf file for editing.
phys-schost# vi /etc/nsswitch.conf
Add the cluster switch to the beginning of the lookups for the hosts and netmasks entries.
The modified entries would appear similar to the following:
…
hosts:      cluster files nis [NOTFOUND=return]
…
netmasks:   cluster files nis [NOTFOUND=return]
…
To install an application in a non-global zone, use the same procedure as for a standalone system. See your application's installation documentation for procedures to install the software in a non-global zone. Also see Adding and Removing Packages and Patches on a Solaris System With Zones Installed (Task Map) in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
To install and configure a data service in a non-global zone, see the Sun Cluster manual for the individual data service.