Oracle Solaris Cluster Software Installation Guide: Oracle Solaris Cluster 4.0
1. Planning the Oracle Solaris Cluster Configuration
2. Installing Software on Global-Cluster Nodes
3. Establishing the Global Cluster
4. Configuring Solaris Volume Manager Software
Configuring Solaris Volume Manager Software
How to Create State Database Replicas
Creating Disk Sets in a Cluster
How to Add Drives to a Disk Set
Configuring Dual-String Mediators
Requirements for Dual-String Mediators
How to Check For and Fix Bad Mediator Data
5. Creating a Cluster File System
This section describes how to create disk sets for a cluster configuration. When you create a Solaris Volume Manager disk set in an Oracle Solaris Cluster environment, the disk set is automatically registered with the Oracle Solaris Cluster software as a device group of type svm. To create or delete an svm device group, you must use Solaris Volume Manager commands and utilities to create or delete the underlying disk set of the device group.
The following table lists the tasks that you perform to create disk sets. Complete the procedures in the order that is indicated.
Table 4-2 Task Map: Configuring Solaris Volume Manager Disk Sets
Before You Begin
The disk set that you intend to create must meet one of the following requirements:
If the disk set is configured with exactly two disk strings, the disk set must connect to exactly two nodes and use two or three mediator hosts. These mediator hosts must include the two hosts attached to the enclosures containing the disk set. See Configuring Dual-String Mediators for details on how to configure dual-string mediators.
If the disk set is configured with more than two disk strings, ensure that for any two disk strings S1 and S2, the sum of the number of drives on those strings exceeds the number of drives on the third string S3. Stated as a formula, the requirement is that count(S1) + count(S2) > count(S3).
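The string-count requirement can be sketched as a quick shell check. The drive counts below are hypothetical, for illustration only:

```shell
# Check the requirement for every pair of strings: the sum of drives on
# any two strings must exceed the number of drives on the remaining string.
# s1, s2, s3 are hypothetical drive counts, not values from this guide.
s1=4 s2=4 s3=6
ok=yes
[ $((s1 + s2)) -gt "$s3" ] || ok=no
[ $((s1 + s3)) -gt "$s2" ] || ok=no
[ $((s2 + s3)) -gt "$s1" ] || ok=no
echo "requirement met: $ok"
```

With these counts every pairwise sum exceeds the third string, so the configuration would satisfy the requirement.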
You can run this command on all nodes in the cluster at the same time.
phys-schost# cldevice populate
See the cldevice(1CL) man page for more information.
The command executes remotely on all nodes, even though the command is run from just one node. To determine whether the command has completed processing, run the following command on each node of the cluster:
phys-schost# ps -ef | grep scgdevs
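As a sketch, that per-node check can be wrapped in a small polling loop (POSIX shell assumed; the bracketed grep pattern keeps grep from matching its own process entry):

```shell
# Poll until no scgdevs process remains on this node, then report completion.
wait_for_scgdevs() {
    while ps -ef | grep '[s]cgdevs' >/dev/null; do
        sleep 5    # scgdevs is still processing; check again shortly
    done
    echo "scgdevs has completed on this node"
}
wait_for_scgdevs
```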
For instructions, see How to Create State Database Replicas.
The following command creates the disk set and registers the disk set as an Oracle Solaris Cluster device group.
phys-schost# metaset -s setname -a -h node1 node2
-s setname
Specifies the disk set name.
-a
Adds (creates) the disk set.
node1
Specifies the name of the primary node to master the disk set.
node2
Specifies the name of the secondary node to master the disk set.
Note - When you run the metaset command to configure a Solaris Volume Manager device group on a cluster, the command designates one secondary node by default. You can change the desired number of secondary nodes in the device group by using the clsetup utility after the device group is created. Refer to Administering Device Groups in Oracle Solaris Cluster System Administration Guide for more information about how to change the numsecondaries property.
phys-schost# cldevicegroup sync device-group-name
For more information about data replication, see Chapter 4, Data Replication Approaches, in Oracle Solaris Cluster System Administration Guide.
phys-schost# metaset -s setname
phys-schost# cldevicegroup set -p name=value device-group
-p
Specifies a device-group property.
name
Specifies the name of a property.
value
Specifies the value or setting of the property.
device-group
Specifies the name of the device group. The device-group name is the same as the disk-set name.
See the cldevicegroup(1CL) man page for information about device-group properties.
Example 4-2 Creating a Disk Set
The following command creates two disk sets, dg-schost-1 and dg-schost-2, with the nodes phys-schost-1 and phys-schost-2 specified as the potential primaries.
phys-schost# metaset -s dg-schost-1 -a -h phys-schost-1 phys-schost-2
phys-schost# metaset -s dg-schost-2 -a -h phys-schost-1 phys-schost-2
Next Steps
Add drives to the disk set. Go to How to Add Drives to a Disk Set.
When you add a drive to a disk set, the volume management software repartitions the drive so that the state database for the disk set can be placed on the drive.
A small portion of each drive is reserved for use by Solaris Volume Manager software. In Extensible Firmware Interface (EFI) labeled devices, slice 6 is used. The remainder of the space on each drive is placed into slice 0.
Drives are repartitioned when they are added to the disk set only if the target slice is not configured correctly.
Any existing data on the drives is lost by the repartitioning.
If the target slice starts at cylinder 0, and the drive partition is large enough to contain a state database replica, the drive is not repartitioned.
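That rule can be sketched as a small decision check. The slice geometry and replica size below are hypothetical values, not Solaris constants:

```shell
# A drive keeps its partitioning only when the target slice starts at
# cylinder 0 and is large enough to hold a state database replica.
slice_start_cyl=0        # hypothetical: cylinder where the target slice begins
slice_blocks=16384       # hypothetical: size of the target slice
replica_blocks=8192      # hypothetical: space a state database replica needs
if [ "$slice_start_cyl" -eq 0 ] && [ "$slice_blocks" -ge "$replica_blocks" ]; then
    action="drive is not repartitioned"
else
    action="drive is repartitioned; existing data is lost"
fi
echo "$action"
```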
Before You Begin
Ensure that the disk set has been created. For instructions, see How to Create a Disk Set.
phys-schost# cldevice show | grep Device
Choose drives that are shared by the cluster nodes that will master or potentially master the disk set.
Use the full DID device name, which has the form /dev/did/rdsk/dN, when you add a drive to a disk set.
In the following example, the entries for DID device /dev/did/rdsk/d3 indicate that the drive is shared by phys-schost-1 and phys-schost-2.
=== DID Device Instances ===
DID Device Name:          /dev/did/rdsk/d1
  Full Device Path:         phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:          /dev/did/rdsk/d2
  Full Device Path:         phys-schost-1:/dev/rdsk/c0t6d0
DID Device Name:          /dev/did/rdsk/d3
  Full Device Path:         phys-schost-1:/dev/rdsk/c1t1d0
  Full Device Path:         phys-schost-2:/dev/rdsk/c1t1d0
…
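As a sketch, shared drives can be picked out of such a listing automatically: a DID device with more than one Full Device Path entry is, in this listing, attached to more than one node. The awk filter below runs against an embedded copy of the sample output; on a live cluster you would pipe the output of cldevice show into it instead.

```shell
# Count Full Device Path lines per DID device and print devices that have
# more than one path entry (in this sample, paths on different nodes).
shared=$(awk '
    /DID Device Name:/  { name = $NF }
    /Full Device Path:/ { count[name]++ }
    END { for (d in count) if (count[d] > 1) print d }
' <<'EOF'
DID Device Name:    /dev/did/rdsk/d1
  Full Device Path:   phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:    /dev/did/rdsk/d2
  Full Device Path:   phys-schost-1:/dev/rdsk/c0t6d0
DID Device Name:    /dev/did/rdsk/d3
  Full Device Path:   phys-schost-1:/dev/rdsk/c1t1d0
  Full Device Path:   phys-schost-2:/dev/rdsk/c1t1d0
EOF
)
echo "$shared"
```

Only /dev/did/rdsk/d3 appears, matching the shared drive identified in the text.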
phys-schost# cldevicegroup switch -n node devicegroup
-n node
Specifies the node to take ownership of the device group.
devicegroup
Specifies the device group name, which is the same as the disk set name.
Use the full DID path name.
phys-schost# metaset -s setname -a /dev/did/rdsk/dN
-s setname
Specifies the disk set name, which is the same as the device group name.
-a
Adds the drive to the disk set.
Note - Do not use the lower-level device name (cNtXdY) when you add a drive to a disk set. Because the lower-level device name is a local name that is not unique throughout the cluster, using this name might prevent the disk set from being able to switch over.
phys-schost# metaset -s setname
Example 4-3 Adding Drives to a Disk Set
The following metaset command adds the drives /dev/did/rdsk/d1 and /dev/did/rdsk/d2 to the disk set dg-schost-1.
phys-schost# metaset -s dg-schost-1 -a /dev/did/rdsk/d1 /dev/did/rdsk/d2
Next Steps
If you want to repartition drives for use in volumes, go to How to Repartition Drives in a Disk Set.
Otherwise, go to How to Create an md.tab File to find out how to define metadevices or volumes by using an md.tab file.
The metaset(1M) command repartitions drives in a disk set so that a small portion of each drive is reserved for use by Solaris Volume Manager software. In Extensible Firmware Interface (EFI) labeled devices, slice 6 is used. The remainder of the space on each drive is placed into slice 0. To make more effective use of the drive, use this procedure to modify the disk layout. If you allocate space to EFI slices 1 through 5, you can use these slices when you set up Solaris Volume Manager volumes.
When you repartition a drive, configure the target slice so that the metaset command does not repartition the drive again: start the slice at cylinder 0 and make it large enough to hold a state database replica.
Do not allow the target slice to overlap any other slice on the drive.
See your Solaris Volume Manager administration guide to determine the size of a state database replica for your version of the volume-manager software.
Set the Flag field of the target slice to wu (read-write, unmountable). Do not set this field to read-only.
See the format(1M) man page for details.
Next Steps
Define volumes by using an md.tab file. Go to How to Create an md.tab File.
Create an /etc/lvm/md.tab file on each node in the cluster. Use the md.tab file to define Solaris Volume Manager volumes for the disk sets that you created.
Note - If you are using local volumes, ensure that local volume names are distinct from the device IDs that are used to form disk sets. For example, if the device ID /dev/did/dsk/d3 is used in a disk set, do not use the name /dev/md/dsk/d3 for a local volume. This requirement does not apply to shared volumes, which use the naming convention /dev/md/setname/{r}dsk/d#.
Use the full DID device names in the md.tab file in place of the lower-level device names (cNtXdY). The DID device name takes the form /dev/did/rdsk/dN.
phys-schost# cldevice show | grep Device
=== DID Device Instances ===
DID Device Name:          /dev/did/rdsk/d1
  Full Device Path:         phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:          /dev/did/rdsk/d2
  Full Device Path:         phys-schost-1:/dev/rdsk/c0t6d0
DID Device Name:          /dev/did/rdsk/d3
  Full Device Path:         phys-schost-1:/dev/rdsk/c1t1d0
  Full Device Path:         phys-schost-2:/dev/rdsk/c1t1d0
…
See Example 4-4 for a sample md.tab file.
Note - If you have existing data on the drives that will be used for the submirrors, you must back up the data before volume setup. Then restore the data onto the mirror.
To avoid possible confusion between local volumes on different nodes in a cluster environment, use a naming scheme that makes each local volume name unique throughout the cluster. For example, for node 1 choose names from d100 to d199. For node 2 use d200 to d299.
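That naming scheme can be sketched as a tiny helper that maps a node number to its block of one hundred names (the node numbers and ranges match the illustrative ones in the text):

```shell
# Map node N to the local volume name range dN00 through dN99 so that
# local volume names never collide across cluster nodes.
volume_range() {
    node_id=$1
    lo=$((node_id * 100))
    hi=$((node_id * 100 + 99))
    echo "node $node_id: d$lo to d$hi"
}
volume_range 1    # node 1 uses d100 to d199
volume_range 2    # node 2 uses d200 to d299
```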
See your Solaris Volume Manager documentation and the md.tab(4) man page for details about how to create an md.tab file.
Example 4-4 Sample md.tab File
The following sample md.tab file defines the disk set that is named dg-schost-1. The ordering of lines in the md.tab file is not important.
dg-schost-1/d0 -m dg-schost-1/d10 dg-schost-1/d20
dg-schost-1/d10 1 1 /dev/did/rdsk/d1s0
dg-schost-1/d20 1 1 /dev/did/rdsk/d2s0
The sample md.tab file is constructed as follows.
The first line defines the device d0 as a mirror of volumes d10 and d20. The -m signifies that this device is a mirror device.
dg-schost-1/d0 -m dg-schost-1/d10 dg-schost-1/d20
The second line defines volume d10, the first submirror of d0, as a one-way stripe.
dg-schost-1/d10 1 1 /dev/did/rdsk/d1s0
The third line defines volume d20, the second submirror of d0, as a one-way stripe.
dg-schost-1/d20 1 1 /dev/did/rdsk/d2s0
Next Steps
Activate the volumes that are defined in the md.tab files. Go to How to Activate Volumes.
Perform this procedure to activate Solaris Volume Manager volumes that are defined in md.tab files.
phys-schost# cldevicegroup switch -n node device-group
-n node
Specifies the node that takes ownership.
device-group
Specifies the disk set name.
phys-schost# metainit -s setname -a
-s setname
Specifies the disk set name.
-a
Activates all volumes in the md.tab file.
If necessary, run the metainit(1M) command from another node that has connectivity to the drives. This step is required for cluster-pair topologies where the drives are not accessible by all nodes.
phys-schost# metastat -s setname
See the metastat(1M) man page for more information.
phys-schost# prtvtoc /dev/rdsk/cNtXdYsZ > filename
Store the file in a location outside the cluster. If you make any disk configuration changes, run this command again to capture the changed configuration. If a disk fails and needs replacement, you can use this information to restore the disk partition configuration. For more information, see the prtvtoc(1M) man page.
An archived backup of your cluster configuration makes it easier to recover the configuration later. For more information, see How to Back Up the Cluster Configuration in Oracle Solaris Cluster System Administration Guide.
Example 4-5 Activating Volumes in the md.tab File
In the following example, all volumes that are defined in the md.tab file for disk set dg-schost-1 are activated.
phys-schost# metainit -s dg-schost-1 -a
Next Steps
If your cluster contains disk sets that are configured with exactly two disk enclosures and two nodes, add dual-string mediators. Go to Configuring Dual-String Mediators.
Otherwise, go to How to Create Cluster File Systems to find out how to create a cluster file system.