Oracle Solaris Administration: Devices and File Systems (Oracle Solaris 11 Information Library)
The following task map identifies the procedures for setting up a ZFS root pool disk for a ZFS root file system on an x86 based system.
Although the procedures that describe how to set up a disk and create an fdisk partition can be used with ZFS file systems, a ZFS file system is not directly mapped to a disk or a disk slice. You must create a ZFS storage pool before creating a ZFS file system. For more information, see Oracle Solaris Administration: ZFS File Systems.
The root pool contains the root file system that is used to boot the Oracle Solaris OS. If a root pool disk becomes damaged and the root pool is not mirrored, the system might not boot. If a root pool disk becomes damaged, you have two ways to recover:
You can reinstall the entire Oracle Solaris OS.
You can replace the root pool disk and restore your file systems from snapshots or from a backup medium.

You can reduce system downtime due to hardware failures by creating a redundant root pool. The only supported redundant root pool configuration is a mirrored root pool.
A disk that is used in a non-root pool usually contains space for user or data files. You can attach or add another disk to a root pool or a non-root pool for more disk space. Or, you can replace a damaged disk in a pool in the following ways:
A disk can be replaced in a non-redundant pool if all the devices are currently ONLINE.
A disk can be replaced in a redundant pool if enough redundancy exists among the other devices.
In a mirrored root pool, you can replace a failed disk, or you can attach a larger disk and then detach the smaller disk to increase the pool's size.
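The attach-then-detach sequence in the last case can be sketched as follows. The device names c0t0d0s0 (the existing, smaller root pool disk) and c1t0d0s0 (the larger replacement) are hypothetical examples:

```shell
# Attach the larger disk as a mirror of the existing root pool disk.
# Device names here are hypothetical examples.
zpool attach rpool c0t0d0s0 c1t0d0s0

# Verify that resilvering has completed before detaching anything.
zpool status rpool

# Detach the smaller disk; the pool can then use the new disk's capacity.
zpool detach rpool c0t0d0s0
```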
In general, how you set up a disk depends on the system hardware, so review your hardware documentation when adding or replacing a disk on your system. If you are adding a disk to an existing controller and the system supports hot-plugging, it might just be a matter of inserting the disk in an empty slot. If you need to configure a new controller, see Dynamic Reconfiguration and Hot-Plugging.
Refer to your hardware installation guide for information on replacing a disk.
After the disk is connected or replaced, create an fdisk partition. Go to x86: How to Create a Solaris fdisk Partition.
You must create a disk slice for a disk that is intended for a ZFS root pool. This is a long-standing boot limitation. Review the following root pool disk requirements:
Must contain a disk slice and an SMI (VTOC) label.
An EFI label is not supported for a root pool disk.
A root pool disk on an x86 system must contain an fdisk partition.
Must be a single disk or part of a mirrored configuration. Neither a striped (multi-disk, non-redundant) configuration nor a RAID-Z configuration is supported for the root pool.
All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system.
All Solaris OS components must reside in the root pool, with the exception of the swap and dump devices.
On an x86 based system, you must first create an fdisk partition. Then, create a disk slice with the bulk of disk space in slice 0.
Using different slices on a disk to share that disk among different operating systems, or with a different ZFS storage pool or storage pool components, is not recommended.
In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.
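Before replacing or attaching a root pool disk, it can help to confirm the pool's current disk configuration. A minimal check, assuming the default root pool name rpool:

```shell
# Display the root pool's current disks and health.
zpool status rpool

# List all disks that the system currently sees.
format
```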
For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c2t1d0s0
# cfgadm -c unconfigure c2::dsk/c2t1d0
# cfgadm -c configure c2::dsk/c2t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
For example, the format command sees 4 disks connected to this system.
# format -e
AVAILABLE DISK SELECTIONS:
       1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
       2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
       3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
       4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
Specify disk (enter its number): 1
selecting c8t0d0
[disk formatted]
.
.
.
format>
If the disk has no fdisk partition, you will see a message similar to the following:
format> fdisk
No Solaris fdisk partition found.
If so, go to step 4 to create an fdisk partition.
If the disk has an EFI partition or some other partition type, go to step 5 to create a Solaris fdisk partition.
If the disk has a Solaris fdisk partition, go to step 6 to create a disk slice for the root pool.
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
Then, go to step 6 to create a disk slice for the root pool.
If you print the disk's partition table with the format utility and the table refers to the first sector and the size, the disk has an EFI label. You will need to create a Solaris fdisk partition as follows:
Select fdisk from the format options.
# format -e c8t0d0
selecting c8t0d0
[disk formatted]
format> fdisk
Delete the existing EFI partition by selecting option 3, Delete a partition.
Enter Selection: 3
Specify the partition number to delete (or enter 0 to exit): 1
Are you sure you want to delete partition 1? This will make all files and
programs in this partition inaccessible (type "y" or "n"). y
Partition 1 has been deleted.
Create a new Solaris partition by selecting option 1, Create a partition.
Enter Selection: 1
Select the partition type to create: 1
Specify the percentage of disk to use for this partition
(or type "c" to specify the size in cylinders). 100
Should this become the active partition? If yes, it will be activated
each time the computer is reset or turned on.
Please type "y" or "n". y
Partition 1 is now the active partition.
Update the disk configuration and exit.
Enter Selection: 6
format>
Display the SMI partition table. If the default partition table is applied, then slice 0 might be 0 in size or it might be too small. See the next step.
format> partition
partition> print
Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.
partition> modify
Select partitioning base:
        0. Current partition table (default)
        1. All Free Hog
Choose base (enter number) [0]? 1

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0                0         (0/0/0)             0
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wm       0                0         (0/0/0)             0

Do you wish to continue creating a new partition
table based on above table[yes]?
Free Hog partition[6]? 0
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 17829      136.58GB    (17829/0/0) 286422885
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wm       0                0         (0/0/0)             0

Do you wish to continue creating a new partition
table based on above table[yes]? yes
Enter table name (remember quotes): "c8t0d0"
Ready to label disk, continue? yes
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
On some hardware, you do not have to online the replacement disk after it is inserted.
If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:
# zpool attach rpool c0t0d0s0 c1t0d0s0
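After the attach operation, confirm that the new disk has finished resilvering before relying on it, for example before detaching the old disk or booting from the new one:

```shell
# Check resilver status; wait until the output reports that
# the resilver has completed.
zpool status rpool
```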
For example:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t0d0s0
This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.
# zpool detach rpool c0t0d0s0
After you have created a disk slice for the ZFS root file system, if you need to restore root pool snapshots to recover your root pool, see How to Replace a Disk in a ZFS Root Pool in Oracle Solaris Administration: ZFS File Systems.
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/cwtxdysz
For more information, see installgrub(1M).
# init 6
Example 13-1 x86: Installing Boot Blocks for a ZFS Root File System
If you physically replace the disk that is intended for the root pool and the Oracle Solaris OS is then reinstalled, or you attach a new disk for the root pool, the boot blocks are installed automatically. If you replace a disk that is intended for the root pool by using the zpool replace command, then you must install the boot blocks manually so that the system can boot from the replacement disk.
The following example shows how to install the boot blocks for a ZFS root file system.
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
stage2 written to partition 0, 277 sectors starting at 50 (abs 16115)
stage1 written to partition 0 sector 0 (abs 16065)
If you are setting up a disk to be used with a non-root ZFS file system, the disk is relabeled automatically when the pool is created or when the disk is added to the pool. If a pool is created with whole disks or when a whole disk is added to a ZFS storage pool, an EFI label is applied. For more information about EFI disk labels, see EFI Disk Label.
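As a sketch of the whole-disk case, assuming a hypothetical spare disk c1t2d0, creating a non-root pool from the whole disk applies the EFI label automatically:

```shell
# Create a non-root pool from a whole disk (hypothetical device name);
# ZFS applies an EFI label to the disk automatically.
zpool create tank c1t2d0

# Verify the new pool's configuration.
zpool status tank
```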
Generally, most modern bus types support hot-plugging. This means you can insert a disk in an empty slot and the system recognizes it. For more information about hot-plugging devices, see Chapter 6, Dynamically Configuring Devices (Tasks).
For more information, see How to Use Your Assigned Administrative Rights in Oracle Solaris Administration: Security Services.
Refer to the disk's hardware installation guide for details.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline tank c1t1d0
# cfgadm -c unconfigure c1::dsk/c1t1d0
<Physically remove failed disk c1t1d0>
<Physically insert replacement disk c1t1d0>
# cfgadm -c configure c1::dsk/c1t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
Review the output of the format utility to see if the disk is listed under AVAILABLE DISK SELECTIONS. Then, quit the format utility.
# format
# zpool replace tank c1t1d0
# zpool online tank c1t1d0
Confirm that the new disk is resilvering.
# zpool status tank
For example:
# zpool attach tank mirror c1t0d0 c2t0d0
Confirm that the new disk is resilvering.
# zpool status tank
For more information, see Chapter 4, Managing Oracle Solaris ZFS Storage Pools, in Oracle Solaris Administration: ZFS File Systems.