Solstice DiskSuite software requires some additional space on the multihost disks and imposes some restrictions on its use. For example, if you are using UNIX file system (UFS) logging under Solstice DiskSuite, one to two percent of each multihost disk must be reserved for metadevice state database replicas and UFS logging. Refer to Appendix B, Configuring Solstice DiskSuite, and to the Solstice DiskSuite documentation for specific guidelines and restrictions.
All metadevices used by each shared diskset are created in advance, at reconfiguration boot time, based on settings in the md.conf file. The fields in the md.conf file are described in the Solstice DiskSuite documentation. The two fields used in a Sun Cluster configuration are md_nsets and nmd: the md_nsets field defines the number of disksets, and the nmd field defines the number of metadevices to create for each diskset. Set these fields at install time to allow for all predicted future expansion of the cluster.
Extending the Solstice DiskSuite configuration after the cluster is in production is time consuming because it requires a reconfiguration reboot for each node and always carries the risk that there will not be enough space allocated in the root (/) file system to create all of the requested devices.
The value of md_nsets must be set to the expected number of logical hosts in the cluster, plus one, to allow Solstice DiskSuite to manage the private disks on the local host (that is, the metadevices that are not in any shared diskset).
The value of nmd must be set to the predicted largest number of metadevices used by any one of the disksets in the cluster. For example, if a cluster uses 10 metadevices in its first 15 disksets, but 1000 metadevices in the 16th diskset, nmd must be set to at least 1000.
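As a sketch, the md.conf entries matching the example above (16 logical hosts, each with one diskset, and up to 1000 metadevices in one diskset) might look like the following excerpt; the values are illustrative assumptions, not recommendations:

```
# /kernel/drv/md.conf (excerpt) -- illustrative values only
md_nsets=17;   # 16 logical hosts, plus 1 for the local metadevices
nmd=1000;      # largest number of metadevices in any one diskset
```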
All cluster nodes (or cluster pairs in the cluster pair topology) must have identical md.conf files, regardless of the number of logical hosts served by each node. Failure to follow this guideline can result in serious Solstice DiskSuite errors and possible loss of data.
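One way to check this guideline is to gather a copy of md.conf from each node and compare them byte for byte. The sketch below uses two scratch copies to stand in for the per-node /kernel/drv/md.conf files; in practice you would first copy the file from each cluster node:

```shell
# Sketch: verify that md.conf is identical on every node. Two scratch
# copies stand in for the per-node /kernel/drv/md.conf files.
DIR=$(mktemp -d)
printf 'md_nsets=9;\nnmd=1000;\n' > "$DIR/md.conf.node0"
printf 'md_nsets=9;\nnmd=1000;\n' > "$DIR/md.conf.node1"
if cmp -s "$DIR/md.conf.node0" "$DIR/md.conf.node1"; then
    RESULT=identical
else
    RESULT=different
fi
echo "md.conf files are $RESULT"
```

If the files differ, correct them and perform a reconfiguration reboot before putting the cluster into production.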
Consider these points when planning your Solstice DiskSuite file system layout:
The HA administrative file system cannot be grown using growfs(1M).
You must create mount points for other file systems at the /logicalhost level.
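A minimal sketch of creating such a mount point, assuming a hypothetical logical host named hahost1. In production the directory is created at the root of the file system (for example, mkdir -p /hahost1/data); a scratch directory stands in for / here so the sketch is runnable:

```shell
# Sketch, assuming a logical host named "hahost1" (hypothetical name).
# $ROOT stands in for / so the example can run without root privileges.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/hahost1/data"    # mount point at the /logicalhost level
ls "$ROOT/hahost1"
```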
Your application might dictate a file system hierarchy and naming convention. Sun Cluster imposes no restrictions on file system naming, as long as the names do not conflict with the directories required by data services.
Use the partitioning scheme described in Table 2-2 for the majority of drives.
In general, if UFS logs are created, the default size for Slice 6 should be 1 percent of the size of the largest multihost disk found on the system.
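The arithmetic is straightforward; the sketch below uses a hypothetical 9216-Mbyte largest multihost disk as an example:

```shell
# Sketch: compute the default Slice 6 (UFS log) size as 1 percent of
# the largest multihost disk. 9216 Mbytes is a hypothetical example.
LARGEST_DISK_MB=9216
UFS_LOG_MB=$((LARGEST_DISK_MB / 100))
echo "Slice 6 default size: ${UFS_LOG_MB} Mbytes"
```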
Slice 2, which overlaps Slices 6 and 0, is used for raw devices when there are no UFS logs.
In addition, the first drive on each of the first two controllers in each of the disksets should be partitioned as described in Table 2-3.
Each diskset has an HA administrative file system associated with it. This file system is not shared through NFS; it is used for data service-specific state and configuration information.
Partition 7 is always reserved for use by Solstice DiskSuite as the first or last 2 Mbytes on each multihost disk.
Table 2-2 Multihost Disk Partitioning

| Slice | Description |
|---|---|
| 7 | 2 Mbytes, reserved for Solstice DiskSuite |
| 6 | UFS logs |
| 0 | Remainder of the disk |
| 2 | Overlaps Slices 6 and 0 |
Table 2-3 Multihost Disk Partitioning for the First Drive on the First Two Controllers
| Slice | Description |
|---|---|
| 7 | 2 Mbytes, reserved for Solstice DiskSuite |
| 5 | 2 Mbytes, UFS log for HA administrative file systems |
| 4 | 9 Mbytes, UFS master for HA administrative file systems |
| 6 | UFS logs |
| 0 | Remainder of the disk |
| 2 | Overlaps Slices 6 and 0 |