Consider the following points when you plan Solstice DiskSuite/Solaris Volume Manager configurations:
Local metadevice names or volume names – The name of each local Solstice DiskSuite metadevice or Solaris Volume Manager volume must be unique throughout the cluster. Also, the name cannot be the same as any device-ID name.
Mediators – Each diskset configured with exactly two disk strings and mastered by exactly two nodes must have Solstice DiskSuite/Solaris Volume Manager mediators configured for the diskset. A disk string consists of a disk enclosure, its physical disks, cables from the enclosure to the node(s), and the interface adapter cards. Observe the following rules to configure mediators:
You must configure each diskset with exactly two nodes that act as mediator hosts.
You must use the same two nodes for all disksets that require mediators. Those two nodes must master those disksets.
Mediators cannot be configured for disksets that do not meet the two-string and two-host requirements.
See the mediator(7D) man page for details.
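As a sketch of how the mediator rules above are applied, the same two mediator hosts are added to each qualifying diskset with the metaset command and then checked with medstat. The diskset name nfs-set and the host names phys-node1 and phys-node2 are placeholders, not names from this document:

```shell
# Add the same two nodes as mediator hosts to a two-string diskset.
# nfs-set, phys-node1, and phys-node2 are example names only.
metaset -s nfs-set -a -m phys-node1 phys-node2

# Verify the mediator status for the diskset.
medstat -s nfs-set
```

Repeat the metaset -a -m command for every diskset that requires mediators, always naming the same two hosts.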
/kernel/drv/md.conf settings – All Solstice DiskSuite metadevices or Solaris Volume Manager volumes used by each diskset are created in advance, at reconfiguration boot time. This reconfiguration is based on the configuration parameters that exist in the /kernel/drv/md.conf file.
All cluster nodes must have identical /kernel/drv/md.conf files, regardless of the number of disksets that are served by each node. Failure to follow this guideline can result in serious Solstice DiskSuite/Solaris Volume Manager errors and possible loss of data.
You must modify the nmd and md_nsets fields as follows to support a Sun Cluster configuration:
md_nsets – The md_nsets field defines the total number of disksets that can be created for a system to meet the needs of the entire cluster. Set the value of md_nsets to the expected number of disksets in the cluster plus one additional diskset. Solstice DiskSuite/Solaris Volume Manager software uses the additional diskset to manage the private disks on the local host. The private disks are those metadevices or volumes that are not in the local diskset.
The maximum number of disksets that are allowed per cluster is 32. This number allows for 31 disksets for general use plus one diskset for private disk management. The default value of md_nsets is 4.
nmd – The nmd field defines the number of metadevices or volumes that are created for each diskset. Set the value of nmd to the highest metadevice or volume name number that you predict will be used by any one diskset in the cluster. For example, if a cluster uses 10 metadevices or volumes in its first 15 disksets, but 1000 metadevices or volumes in the 16th diskset, set the value of nmd to at least 1000. Also, the value of nmd must be large enough to ensure that enough numbers exist for each device-ID name. The number must also be large enough to ensure that each local metadevice name or local volume name can be unique throughout the cluster.
The highest allowed value of a metadevice or volume name per diskset is 8192. The default value of nmd is 128.
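As an illustration of these two fields, the following /kernel/drv/md.conf fragment might be used by a cluster that is expected to grow to nine disksets (plus the one diskset for private disk management) with metadevice or volume names up to d1023. The specific values shown are examples only; choose values that match your own predicted configuration:

```
# /kernel/drv/md.conf (identical on every cluster node)
# md_nsets=10: nine disksets for general use plus one for
#              private disk management
# nmd=1024:    allows metadevice/volume names up to d1023
name="md" parent="pseudo" nmd=1024 md_nsets=10;
```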
Set these fields at installation time to allow for all predicted future expansion of the cluster. Increasing the value of these fields after the cluster is in production is time consuming, because the change requires a reconfiguration reboot of each node. Raising these values later also increases the possibility that inadequate space is allocated in the root (/) file system to create all of the requested devices.
At the same time, keep the values of the nmd field and the md_nsets field as low as possible. Memory structures exist for all possible devices as determined by nmd and md_nsets, even if you have not created those devices. For optimal performance, keep the values of nmd and md_nsets only slightly higher than the number of metadevices or volumes that you plan to use.
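A change to /kernel/drv/md.conf takes effect only after a reconfiguration reboot. A minimal sketch of applying the change, assuming the file has already been edited identically on every node, is:

```shell
# Run on each cluster node in turn, after verifying that
# /kernel/drv/md.conf is identical on all nodes.
reboot -- -r
```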
See “System and Startup Files” in Solstice DiskSuite 4.2.1 Reference Guide or “System Files and Startup Files” in Solaris Volume Manager Administration Guide for more information about the md.conf file.