Solaris 11.2 is based on a ZFS file system. Disk I/O, disk partitioning, and disk mirroring (or RAID) are handled entirely by ZFS. Consequently, there should be no need to partition the disk (as was typically done with UFS file systems). The whole system disk should be presented as a single partition.
Your Solaris platform should be configured with two physical disk drives. Partition the system disk and its mirror drive for optimal ZFS performance.
On a new system, before the operating system installation, each of the system disk drives can be partitioned so that partition-0 contains most, if not all, of the available disk space. ZFS operates faster and more reliably if it has access to the whole disk. Ensure that the partition defined for ZFS on the second disk is the same size as the partition defined on the primary disk.
On a system where Solaris 11.2 is already installed, use format or fdisk on the primary system disk to view the size of the root partition. Format the second system disk with a partition of equal size. Label the disk when the format is complete.
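One quick way to compare the two partition tables, assuming the disks carry SMI (VTOC) labels and using the example device names from this section (c0t5000C5000EA48903d0 as the primary disk and c0t5000C5000EA48893d0 as the mirror), is prtvtoc:

# prtvtoc /dev/rdsk/c0t5000C5000EA48903d0s2     (primary system disk)
# prtvtoc /dev/rdsk/c0t5000C5000EA48893d0s2     (intended mirror disk)

Confirm that the sector count of partition 0 on the second disk is at least as large as that of partition 0 on the primary disk.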
When the system is up, verify the rpool with the command zpool status.
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        rpool                      ONLINE       0     0     0
          c0t5000C5000EA48903d0s0  ONLINE       0     0     0
Identify the second system disk and determine its device-id.
# echo | format
AVAILABLE DISK SELECTIONS:
       0. c0t5000C5000EA48893d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /scsi_vhci/disk@g5000c5000ea48893
       1. c0t5000C5000EA48903d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /scsi_vhci/disk@g5000c5000ea48903
Choose the alternate device that is closest in size to the device revealed in step 2. In this example, the second disk-id is c0t5000C5000EA48893d0.
Add the second disk to the rpool.
# zpool attach -f rpool \
    c0t5000C5000EA48903d0 \
    c0t5000C5000EA48893d0
The system begins resilvering the mirrored drive, copying the contents of the boot drive to the second drive. This operation takes several minutes and should not be interrupted by a reboot.
You can monitor the progress using:
zpool status -v
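If you prefer to wait for the resilver from a shell, a rough sketch is shown below. It assumes the scan line of zpool status contains the phrase "resilver in progress" while the copy is running; confirm the exact wording against the output on your own release.

# while zpool status rpool | grep "resilver in progress" > /dev/null; do sleep 60; done
# zpool status -v rpool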
Note 1: Until resilvering is complete, any status display shows the disk in a degraded mode. The disk remains in a degraded state while information is being copied from the primary disk to the mirror.
Note 2: If the zpool attach fails because the disk is labeled as an EFI disk, follow the process described in the document Solaris Admin: Devices and File Systems (http://docs.oracle.com/cd/E23824_01/pdf/821-1459.pdf). The process to convert the EFI disk to an SMI label is as follows:
# format -e
(select the drive to serve as the rpool mirror)
format> partition
partition> print
partition> label
(specify label type "0")
Ready to label? y
partition> modify
(select "1" All free Hog)
Do you wish to continue ... yes
Free Hog Partition[6]? (specify partition "0")
(Specify a size of "0" to the remaining partitions)
Okay to make this current partition table? yes
Enter table name: "c1t1d0"
Ready to label disk? y
partition> quit
format> quit
Confirm the mirrored rpool configuration.
# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 6.89G in 0h3m with 0 errors
config:

        NAME                       STATE     READ WRITE CKSUM
        rpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C5000EA48903d0  ONLINE       0     0     0
            c0t5000C5000EA48893d0  ONLINE       0     0     0
Repeat this operation on the adjacent node.
The ACSLS file system resides in a zpool on the external shared storage array. The examples below employ a simple mirrored array (RAID 1) using only two disks. These may be real drives, but are most likely virtual devices presented as discrete drives from the attached storage array.
The storage array is already configured with RAID, so it is not essential to configure an additional level of RAID using ZFS for your ACSLS file system. ZFS RAID is essential if using simple JBOD disks, but additional RAID is optional if employing a qualified disk array. The examples below illustrate either approach.
Prepare the shared storage array.
In standard configurations, use a single virtual drive from the disk array. For a ZFS RAID mirroring configuration, use two virtual drives of equal size. Use the administration tool supplied with the disk array, or the Solaris format utility, to partition the two virtual drives so that they are of equal size.
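As a simple sanity check, the capacity reported by format for each virtual drive can be compared. This is only a sketch; the 50.00GB SUN-LCSM100_F devices referred to here are the example devices used later in this section.

# echo | format
(compare the capacity shown in each drive description, for example <SUN-LCSM100_F-50.00GB>)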
Determine the intended base directory for the ACSLS installation.
ACSLS 8.4 is installable in any file system. The base file system chosen should not already exist in the system rpool. If it already exists, the existing file system should be destroyed before creating it under the new zpool.
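To check this, list the datasets currently in the root pool. In the sketch below, rpool/export/home is only the default example; look for whatever base directory you have chosen.

# zfs list -r rpool
(if the chosen base file system, such as rpool/export/home, appears in this listing, destroy it as described below)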
If the default /export/home base directory is used for ACSLS, it is necessary to destroy the /export file system from the default root pool in Solaris 11.2.
To confirm whether /export/home is attached to the rpool, run the command:
# zfs list
To detach /export/home from rpool, first save any files or directories to be preserved (one way to do this is shown in the sketch after the command below). Ensure that no users' home directories are currently active in /export/home. Then use zfs destroy to remove everything under /export:
# zfs destroy -r rpool/export
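A minimal sketch of the whole sequence follows, assuming the contents of /export/home are to be kept. The archive path /var/tmp/export_home.tar is only an illustration; any location outside of /export will do.

# cd /export/home
# tar cvf /var/tmp/export_home.tar .      (optional backup; the archive location is illustrative)
# cd /                                    (leave /export so the dataset is not busy)
# zfs destroy -r rpool/export
# zfs list                                (confirm rpool/export is no longer listed)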
Repeat this step to detach rpool/export on the adjacent node.
Use format to identify the device names of the drives on the attached disk array:
# echo | format
AVAILABLE DISK SELECTIONS:
       0. c0t5000C5000EA48893d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@0,0
          /dev/chassis/SYS/HD0/disk
       1. c0t5000C5000EA48893d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@1,0
          /dev/chassis/SYS/HD1/disk
       3. c0t600A0B800049EDD600000C9952CAA03Ed0 <SUN-LCSM100_F-50.00GB>
          /scsi_vhci/disk@g600a0b800049edd600000c9952caa03e
       4. c0t600A0B800049EE1A0000832652CAA899d0 <SUN-LCSM100_F-50.00GB>
          /scsi_vhci/disk@g600a0b800049ee1a0000832652caa899
In this example, there are two system disks and two virtual disks presented from the disk array; the virtual disks have device names beginning with c0t600A...
Create the acslspool.
For standard configurations using a qualified disk array, create the acslspool as follows:
# zpool create -m /export/home acslspool \
    /dev/dsk/c0t600A0B800049EDD600000C9952CAA03Ed0
If ZFS RAID is added as suggested in step 1, create a mirrored configuration as follows:
# zpool create -m /export/home acslspool mirror \
    /dev/dsk/c0t600A0B800049EDD600000C9952CAA03Ed0 \
    /dev/dsk/c0t600A0B800049EE1A0000832652CAA899d0
Verify the new acslspool.
# zpool status acslspool
  pool: acslspool
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        acslspool                                  ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            c0t600A0B800049EDD600000C9952CAA03Ed0  ONLINE       0     0     0
            c0t600A0B800049EE1A0000832652CAA899d0  ONLINE       0     0     0
Note:
When using a RAID disk array, the mirrored ZFS configuration is optional.

Create a test file in the new pool and verify.
# cd /export/home
# date > test
# ls
test
# cat test
Tue Jan 7 11:48:05 MST 2015
Export the pool.
# cd /
# zpool export acslspool
Log in to the adjacent node (which is referred to as the new current node).
From the new current node, confirm that /export/home (or the intended file system for ACSLS) is not mounted anywhere in the root pool.
# zfs list
If the file system exists in the rpool, repeat step 2 (above) on this current node.
From the new current node, import the acslspool and verify that acslspool is present on this node.
# zpool import acslspool
# zpool status
  pool: acslspool
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        acslspool                                  ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            c0t600A0B800049EDD600000C9952CAA03Ed0  ONLINE       0     0     0
            c0t600A0B800049EE1A0000832652CAA899d0  ONLINE       0     0     0
If the zpool import fails, you can attempt the operation with zpool import -f.
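For example:

# zpool import -f acslspool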
Note:
When using a RAID disk array, the mirrored ZFS configuration is optional.

Verify that the test file is present on the new current node.
# cd /export/home
# ls
test
# cat test
Tue Jan 7 11:48:05 MST 2015