System Administration Guide: Devices and File Systems     Oracle Solaris 10 1/13 Information Library


x86: Setting Up Disks for ZFS File Systems (Task Map)

The following task map identifies the procedures for setting up disks for ZFS file systems on an x86 based system.

Task 1: Set up the disk for a ZFS root file system.
Description: Connect the new disk or replace the existing root pool disk, and boot from a local or remote Oracle Solaris DVD.
For Instructions: x86: How to Set Up a Disk for a ZFS Root File System

Task 2: Create or change an fdisk partition, if necessary.
Description: The disk must contain a valid Solaris fdisk partition.
For Instructions: x86: How to Create a Solaris fdisk Partition

Task 3: Create a disk slice for the ZFS root file system.
Description: Create a disk slice for a disk that is intended for a ZFS root pool. This is a long-standing boot limitation.
For Instructions: x86: How to Create a Disk Slice for a ZFS Root File System

Task 4: Install the boot blocks for a ZFS root file system.
Description: If you replace a disk that is intended for the root pool by using the zpool replace command, you must install the boot blocks manually so that the system can boot from the replacement disk.
For Instructions: x86: How to Install Boot Blocks for a ZFS Root File System

Task 5: Set up a disk for a ZFS file system.
Description: Connect the disk.
For Instructions: x86: How to Set Up a Disk for a ZFS File System

x86: Setting Up Disks for ZFS File Systems

Although the procedures that describe how to set up a disk and create an fdisk partition can be used with a ZFS file system, a ZFS file system is not directly mapped to a disk or a disk slice. You must create a ZFS storage pool before creating a ZFS file system. For more information, see Oracle Solaris ZFS Administration Guide.
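As a minimal sketch of this pool-first model, the commands below create a storage pool and then a file system within it. The pool, file system, and device names are examples only, and the commands must be run as a privileged user on a Solaris system:

```shell
# Create a storage pool from a whole disk (example device name).
zpool create tank c2t0d0

# Create a ZFS file system within the pool; it is mounted
# automatically (at /tank/data by default).
zfs create tank/data

# Verify the pool and its file systems.
zpool status tank
zfs list -r tank
```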

The root pool contains the root file system that is used to boot the Oracle Solaris OS. If a root pool disk becomes damaged and the root pool is not mirrored, the system might not boot. In that case, the damaged disk must be replaced and the root pool recovered, either from a mirrored root pool disk or from root pool snapshots.

A disk that is used in a non-root pool usually contains space for user or data files. You can attach or add another disk to a root pool or a non-root pool for more disk space, or you can replace a damaged disk in a pool.

In general, how you set up a disk on a system depends on the hardware, so review your hardware documentation when adding or replacing a disk on your system. If you need to add a disk to an existing controller, it might just be a matter of inserting the disk in an empty slot, if the system supports hot-plugging. If you need to configure a new controller, see Dynamic Reconfiguration and Hot-Plugging.
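On systems that support hot-plugging, the cfgadm utility can be used to review attachment points and configure a newly inserted disk. The controller and disk names below are examples only:

```shell
# List all attachment points to confirm that the new disk was detected.
cfgadm -al

# If the new disk shows as "unconfigured", configure it so that the
# operating system can use it (example attachment point).
cfgadm -c configure c2::dsk/c2t1d0
```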

x86: How to Set Up a Disk for a ZFS Root File System

Refer to your hardware installation guide for information on replacing a disk.

  1. Disconnect the damaged disk from the system, if necessary.
  2. Connect the replacement disk to the system, and check the disk's physical connections.
  3. Follow the instructions in the following table, depending on whether you are booting from a local Oracle Solaris DVD or from a remote Oracle Solaris DVD over the network.
    Boot Type: From an Oracle Solaris DVD in a local drive
    Action:
    1. Make sure the Oracle Solaris DVD is in the drive.
    2. Select the option to boot from the media.

    Boot Type: From the network
    Action:
    1. Select the option to boot from the network.
After You Set Up a Disk for a ZFS Root File System ...

After the disk is connected or replaced, create an fdisk partition. Go to x86: How to Create a Solaris fdisk Partition.

x86: Creating a Disk Slice for a ZFS Root File System

You must create a disk slice for a disk that is intended for a ZFS root pool. This is a long-standing boot limitation. Review the following root pool disk requirements:

On an x86 based system, you must first create an fdisk partition. Then, create a disk slice with the bulk of disk space in slice 0.

Attempting to use different slices on a disk and share that disk among different operating systems or with a different ZFS storage pool or storage pool components is not recommended.

x86: How to Create a Disk Slice for a ZFS Root File System

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.

For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.

  1. Become superuser.
  2. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline rpool c2t1d0s0
    # cfgadm -c unconfigure c2::dsk/c2t1d0
  3. Physically connect the new or replacement disk to the system, if necessary.
    1. Physically remove the failed disk.
    2. Physically insert the replacement disk.
    3. Configure the replacement disk, if necessary. For example:
      # cfgadm -c configure c2::dsk/c2t1d0

      On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the disk is accessible by reviewing the format output.

    For example, the following format output shows four disks connected to this system.

    # format -e
    AVAILABLE DISK SELECTIONS:
           1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
           2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
           3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
           4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
  5. Select the disk to be used for the ZFS root pool.
    Specify disk (enter its number): 1
    selecting c8t0d0
    [disk formatted]
    .
    .
    .
    format>
  6. Review the status of the fdisk partition.
    • If the disk has no fdisk partition, you will see a message similar to the following:

      format> fdisk
      No Solaris fdisk partition found.

      If so, go to step 7 to create an fdisk partition.

    • If the disk has an EFI fdisk or some other partition type, go to step 8 to create a Solaris fdisk partition.

    • If the disk has a Solaris fdisk partition, go to step 9 to create a disk slice for the root pool.

  7. If necessary, create a Solaris fdisk partition by selecting the fdisk option.
    format> fdisk
    No fdisk table exists. The default partition for the disk is:
    
      a 100% "SOLARIS System" partition
    
    Type "y" to accept the default partition,  otherwise type "n" to edit the
     partition table. y

    Then, go to step 9 to create a disk slice for the root pool.

  8. If the disk has an EFI fdisk partition, then you will need to create a Solaris fdisk partition.

    If you print the disk's partition table with the format utility and the partition table refers to a first sector and a size in sectors, the disk has an EFI partition. You will need to create a Solaris fdisk partition as follows:

    • Select fdisk from the format options.

      # format -e c8t0d0
      selecting c8t0d0
      [disk formatted]
      format> fdisk
    • Delete the existing EFI partition by selecting option 3, Delete a partition.

      Enter Selection: 3
      Specify the partition number to delete (or enter 0 to exit): 1
      Are you sure you want to delete partition 1? This will make all files and 
      programs in this partition inaccessible (type "y" or "n"). y
      
      
      Partition 1 has been deleted.
    • Create a new Solaris partition by selecting option 1, Create a partition.

      Enter Selection: 1
      Select the partition type to create: 1
      Specify the percentage of disk to use for this partition
      (or type "c" to specify the size in cylinders). 100
      Should this become the active partition? If yes, it  will be activated
      each time the computer is reset or turned on.
      Please type "y" or "n". y
      Partition 1 is now the active partition.
    • Update the disk configuration and exit.

      Enter Selection: 6
      format> 
    • Display the SMI partition table. If the default partition table is applied, then slice 0 might be 0 in size or it might be too small. See the next step.

      format> partition
      partition> print
  9. Confirm that the disk has an SMI label by printing the partition (slice) information, and review the slice 0 size information.

    Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.

    partition> modify
    Select partitioning base:
            0. Current partition table (default)
            1. All Free Hog
    Choose base (enter number) [0]? 1
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0                0         (0/0/0)             0
      1       swap    wu       0                0         (0/0/0)             0
      2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6        usr    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wm       0                0         (0/0/0)             0
    
    Do you wish to continue creating a new partition
    table based on above table[yes]? 
    Free Hog partition[6]? 0
    Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]: 
    
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       1 - 17829      136.58GB    (17829/0/0) 286422885
      1       swap    wu       0                0         (0/0/0)             0
      2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6        usr    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wm       0                0         (0/0/0)             0
    Do you wish to continue creating a new partition
    table based on above table[yes]? yes
    Enter table name (remember quotes): "c8t0d0"
    
    Ready to label disk, continue? yes
  10. Let ZFS know that the failed disk is replaced.
    # zpool replace rpool c2t1d0s0
    # zpool online rpool c2t1d0s0

    On some hardware, you do not have to online the replacement disk after it is inserted.

    If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:

    # zpool attach rpool c0t0d0s0 c1t0d0s0
  11. If a root pool disk is replaced with a new disk, apply the boot blocks.

    For example:

    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t0d0s0
  12. Verify that you can boot from the new disk.
  13. If the system boots from the new disk, detach the old disk.

    This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.

    # zpool detach rpool c0t0d0s0
  14. Set up the system to boot automatically from the new disk by reconfiguring the system's BIOS.
After You Have Created a Disk Slice for the ZFS Root File System ...

After you have created a disk slice for the ZFS root file system, if you need to restore root pool snapshots to recover your root pool, see Recovering the ZFS Root Pool or Root Pool Snapshots in Oracle Solaris ZFS Administration Guide.

x86: How to Install Boot Blocks for a ZFS Root File System

  1. Become superuser.
  2. Install the boot blocks on the system disk.
    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/cwtxdysz

    For more information, see installgrub(1M).

  3. Verify that the boot blocks are installed by rebooting the system to run level 3.
    # init 6

Example 11-2 x86: Installing Boot Blocks for a ZFS Root File System

If you physically replace the disk that is intended for the root pool and the Oracle Solaris OS is then reinstalled, or you attach a new disk for the root pool, the boot blocks are installed automatically. If you replace a disk that is intended for the root pool by using the zpool replace command, then you must install the boot blocks manually so that the system can boot from the replacement disk.

The following example shows how to install the boot blocks for a ZFS root file system.

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
stage2 written to partition 0, 277 sectors starting at 50 (abs 16115)
stage1 written to partition 0 sector 0 (abs 16065)

x86: How to Set Up a Disk for a ZFS File System

If you are setting up a disk to be used with a non-root ZFS file system, the disk is relabeled automatically when the pool is created or when the disk is added to the pool. If a pool is created with whole disks or when a whole disk is added to a ZFS storage pool, an EFI label is applied. For more information about EFI disk labels, see EFI (GPT) Disk Label.

Generally, most modern bus types support hot-plugging. This means you can insert a disk in an empty slot and the system recognizes it. For more information about hot-plugging devices, see Chapter 4, Dynamically Configuring Devices (Tasks).
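For example, creating a pool with a whole disk (a device name with no slice suffix) causes ZFS to relabel the disk with an EFI label automatically, with no manual slicing required. The pool and device names below are examples only:

```shell
# A whole-disk device name (c3t0d0, not c3t0d0s0) lets ZFS apply
# an EFI label to the disk automatically when the pool is created.
zpool create datapool c3t0d0

# Verify the pool; the format utility would now show a sector-based
# (EFI) partition table for this disk rather than cylinder-based slices.
zpool status datapool
```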

  1. Become superuser.
  2. Connect the disk to the system and check the disk's physical connections.

    Refer to the disk's hardware installation guide for details.

  3. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline tank c1t1d0
    # cfgadm -c unconfigure c1::dsk/c1t1d0
    <Physically remove failed disk c1t1d0>
    <Physically insert replacement disk c1t1d0>
    # cfgadm -c configure c1::dsk/c1t1d0

    On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the new disk is recognized.

    Review the output of the format utility to see if the disk is listed under AVAILABLE DISK SELECTIONS. Then, quit the format utility.

    # format
  5. Let ZFS know that the failed disk is replaced, if necessary.
    # zpool replace tank c1t1d0
    # zpool online tank c1t1d0

    Confirm that the new disk is resilvering.

    # zpool status tank
  6. Attach a new disk to an existing ZFS storage pool, if necessary.

    For example:

    # zpool attach tank mirror c1t0d0 c2t0d0

    Confirm that the new disk is resilvering.

    # zpool status tank

    For more information, see Chapter 3, Managing Oracle Solaris ZFS Storage Pools, in Oracle Solaris ZFS Administration Guide.
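Once the disk is part of a pool, file systems can be created without any further disk setup. As a sketch, with example pool and file system names:

```shell
# Create file systems in the pool; they inherit the pool's mount
# point hierarchy and are mounted automatically.
zfs create tank/home
zfs create tank/home/user1

# Optionally set a property, such as a quota, on a file system.
zfs set quota=10g tank/home/user1

# Review the resulting file systems and their space usage.
zfs list -r tank
```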