Solaris 11 is based on a ZFS file system. Disk I/O, disk partitioning, and disk mirroring (or RAID) are handled entirely by ZFS. Consequently, there should be no need to partition the disk (as was typically done with UFS file systems). The whole system disk should be presented as a single partition.
Your Solaris platform should be configured with two physical disk drives. Partition the system disk and its mirror drive for optimal ZFS performance.
On a new system, before the operating system is installed, you can partition each of the system disk drives so that partition 0 contains most, if not all, of the disk space. ZFS operates faster and more reliably when it has access to the whole disk. Ensure that the partition you define for ZFS on the second disk is the same size as the one defined on the primary disk.
On a system where Solaris 11.1 is already installed, use format or fdisk on the primary system disk to view the size of the root partition. Then format the second system disk with a partition of equal size. Label the disk when the format is complete.
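As an alternative to sizing the second disk's partition by hand, the label of the primary disk can be copied to the mirror candidate with prtvtoc and fmthard. This is only a sketch; it assumes an SMI (VTOC) labeled primary disk and the device names c1t0d0 (boot disk) and c1t1d0 (mirror) used in the examples below.
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2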
When the system is up, verify the rpool with the zpool status command.
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0
Note: Observe the c/t/d/s format of the disk name on your system. Use that same format when you mirror the drive in Step 4.
Identify the second system disk and determine its device id.
# echo | format
AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
       /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@0,0
       /dev/chassis/SYS/HD0/disk
    1. c1t1d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
       /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@1,0
       /dev/chassis/SYS/HD1/disk
In this example, the second disk-id is c1t1d0.
Add the second disk to the rpool.
# zpool attach -f rpool c1t0d0s0 c1t1d0s0
Note: Be sure to use the same c/t/d/s format that you observed in Step 2 above.
The system begins resilvering the mirrored drive, copying the contents of the boot drive to the second drive. This operation takes several minutes and should not be interrupted by a reboot.
You can monitor the progress using:
zpool status -v
Note 1: Until resilvering is complete, any status display shows the disk in a degraded state. The disk remains degraded while data is copied from the primary disk to the mirror.
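If you prefer not to re-run the status command by hand, a small loop pasted into a root shell can poll until resilvering finishes. This is only a sketch; it assumes the scan line contains the phrase "resilver in progress" while the copy is running, which may vary by Solaris update.
# Poll every 60 seconds until the scan line no longer reports an active resilver.
while zpool status rpool | grep -q 'resilver in progress'
do
    zpool status rpool | grep 'done'    # show the percent-complete line
    sleep 60
done
echo "resilver complete"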
Note 2: If the zpool attach fails because the disk is labeled as an EFI disk, follow the process described on page 220 of Solaris Admin: Devices and File Systems (http://docs.oracle.com/cd/E23824_01/pdf/821-1459.pdf). The process to convert the EFI label to SMI is as follows:
# format -e
    (select the drive to serve as the rpool mirror)
format> partition
partition> print
partition> label
    (specify label type "0")
Ready to label? y
partition> modify
    (select "1" All free Hog)
Do you wish to continue ... yes
Free Hog Partition[6]? (specify partition "0")
    (Specify a size of "0" to the remaining partitions)
Okay to make this current partition table? yes
Enter table name: "c1t1d0"
Ready to label disk? y
partition> quit
format> quit
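Once the disk carries an SMI label, retry the attach with the same command shown earlier:
# zpool attach -f rpool c1t0d0s0 c1t1d0s0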
Confirm the mirrored rpool configuration.
# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 6.89G in 0h3m with 0 errors
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
Repeat this operation on the adjacent node.
The ACSLS file system will reside in a zpool on the external shared storage array. The examples below employ a simple mirrored array (RAID-1) using only two disks. These may be real drives, but are most likely virtual devices presented as discrete drives from the attached storage array.
Your storage array is already configured with RAID, so it is not essential to configure an additional level of RAID using ZFS for your ACSLS file system. ZFS RAID is essential if you are using simple JBOD disks, but additional RAID is optional if you employ a qualified disk array. The examples below will illustrate either approach.
Prepare the shared storage array.
In standard configurations, use a single virtual drive from your disk array. A ZFS RAID mirroring configuration instead uses two virtual drives of equal size. You can use the administrative tool provided with the disk array, or the Solaris format utility, to partition the two virtual drives to equal size.
Determine your intended base directory for the ACSLS installation.
ACSLS 8.3 is installable in any file system. The base file system you choose should not already exist in the system rpool. If it already exists there, destroy the existing file system before you create it under the new zpool.
If you intend to use the default /export/home base directory for ACSLS, it is necessary to destroy the /export file system from the default root pool in Solaris 11.
To confirm whether /export/home is attached to the rpool, run the command:
# zfs list
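On a system with many datasets, the check can be narrowed to dataset names and mountpoints; this is a convenience only, not part of the documented procedure:
# zfs list -o name,mountpoint | grep export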
To detach /export/home from the rpool, first save any files or directories you want to preserve (a snapshot-based sketch follows the destroy command below). Ensure that no user home directories are currently active in /export/home. Then use zfs destroy to remove everything under /export:
# zfs destroy -r rpool/export
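If anything under /export must be preserved, one way to capture it before running the destroy is a recursive snapshot sent to a file. This is a sketch; the snapshot name and the backup path /var/tmp/rpool_export.zsnd are placeholders.
# zfs snapshot -r rpool/export@backup
# zfs send -R rpool/export@backup > /var/tmp/rpool_export.zsnd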
Repeat this step to detach rpool/export on the adjacent node.
Use format to identify the device names of the drives on the attached disk array:
# echo | format
AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
       /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@0,0
       /dev/chassis/SYS/HD0/disk
    1. c1t1d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
       /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@1,0
       /dev/chassis/SYS/HD1/disk
    3. c0t600A0B800049EDD600000C9952CAA03Ed0 <SUN-LCSM100_F-50.00GB>
       /scsi_vhci/disk@g600a0b800049edd600000c9952caa03e
    4. c0t600A0B800049EE1A0000832652CAA899d0 <SUN-LCSM100_F-50.00GB>
       /scsi_vhci/disk@g600a0b800049ee1a0000832652caa899
In this example, there are two system disks, and the two virtual disks presented from the disk array have device names beginning with c0t600A...
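When many devices are listed, the array LUNs can be isolated by filtering the format output. This is a convenience sketch; the LCSM100 string comes from the example listing above and will differ for other arrays.
# echo | format | grep LCSM100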
Create the acslspool.
For standard configurations using a qualified disk array, create the acslspool as follows:
# zpool create -m /export/home acslspool \
      /dev/dsk/c0t600A0B800049EDD600000C9952CAA03Ed0
If you choose to add ZFS RAID as suggested in Step 1, create a mirrored configuration as follows:
# zpool create -m /export/home acslspool mirror \
      /dev/dsk/c0t600A0B800049EDD600000C9952CAA03Ed0 \
      /dev/dsk/c0t600A0B800049EE1A0000832652CAA899d0
Verify the new acslspool.
# zpool status acslspool
  pool: acslspool
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        acslspool                                  ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            c0t600A0B800049EDD600000C9952CAA03Ed0  ONLINE       0     0     0
            c0t600A0B800049EE1A0000832652CAA899d0  ONLINE       0     0     0
Note: When using a RAID disk array, the mirrored ZFS configuration is optional.
Create a test file in the new pool and verify.
# cd /export/home
# date > test
# ls
test
# cat test
Tue Jan 7 11:48:05 MST 2014
Export the pool.
# zpool export acslspool
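After the export, acslspool should no longer appear in the local pool listing. A quick check (not part of the documented procedure):
# zpool list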
Log in to the adjacent node (which will be referred to as the new current node).
From the new current node, confirm that /export/home (or the intended file system for ACSLS) is not mounted anywhere in the root pool.
# zfs list
If the file system exists in the rpool, repeat Step 2 (above) on this current node.
From the new current node, import the acslspool and verify that acslspool is present on this node.
# zpool import acslspool
# zpool status
  pool: acslspool
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        acslspool                                  ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            c0t600A0B800049EDD600000C9952CAA03Ed0  ONLINE       0     0     0
            c0t600A0B800049EE1A0000832652CAA899d0  ONLINE       0     0     0
If the zpool import fails, you can attempt the operation with zpool import -f.
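Before forcing the import, it can help to see which pools this node can reach. Running zpool import with no arguments lists pools that are available for import without importing them:
# zpool import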
Note: When using a RAID disk array, the mirrored ZFS configuration is optional.
Verify the test file is present on the new current node.
# cd /export/home
# ls
test
# cat test
Tue Jan 7 11:48:05 MST 2014