Oracle Solaris 11.1 Administration: Devices and File Systems


x86: Setting Up Disks for ZFS File Systems (Task Map)

The following task map identifies the procedures for setting up a ZFS root pool disk for a ZFS root file system on an x86 based system.

1. Set up the disk for a ZFS root file system.
   Connect the new disk or replace the existing root pool disk and boot from a local or remote Oracle Solaris DVD. For instructions, see x86: How to Set Up a Disk for a ZFS Root File System.

2. Create or change an fdisk partition, if necessary.
   The disk must contain a valid Solaris fdisk partition. For instructions, see x86: How to Create a Solaris fdisk Partition.

3. Recreate the root pool or create an alternate root pool.
   Recreate the root pool, or create an alternate root pool, in case of a failure. For instructions, see How to Recreate the ZFS Root Pool (EFI (GPT)).

4. Install the boot loader if you are replacing a root pool disk by using the zpool replace command.
   If you replace a disk that is intended for the root pool by using the zpool replace command, you must install the boot loader manually so that the system can boot from the replacement disk. For instructions, see x86: How to Install Boot Blocks for a ZFS Root File System.

5. Set up a disk for a ZFS non-root file system.
   Connect the disk. For instructions, see x86: How to Set Up a Disk for a ZFS Non-Root File System.

x86: Setting Up Disks for ZFS File Systems

Although the procedures that describe how to set up a disk and create an fdisk partition can be used with ZFS file systems, a ZFS file system is not directly mapped to a disk or a disk slice. You must create a ZFS storage pool before creating a ZFS file system. For more information, see Oracle Solaris 11.1 Administration: ZFS File Systems.

The root pool contains the root file system that is used to boot the Oracle Solaris OS. If a root pool disk becomes damaged and the root pool is not mirrored, the system might not boot. If a root pool disk becomes damaged, you can recover by replacing the disk, as described in the procedures that follow.

A disk that is used in a non-root pool usually contains space for user or data files. You can attach or add another disk to a root pool or a non-root pool for more disk space. You can also replace a damaged disk in a pool, as described in the procedures that follow.

In general, setting up a disk on the system depends on the hardware, so review your hardware documentation when adding or replacing a disk on your system. If you need to add a disk to an existing controller, it might just be a matter of inserting the disk in an empty slot, if the system supports hot-plugging. If you need to configure a new controller, see Dynamic Reconfiguration and Hot-Plugging.
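The replacement flow that the following procedures walk through can be sketched as a short shell script. The pool and device names (rpool, c8t1d0, the c8::dsk/c8t1d0 attachment point) are the example placeholders used later in this chapter, not values for your system; the script defaults to a dry run that prints each command instead of executing it, since these commands modify the system.

```shell
#!/bin/sh
# Sketch of the root pool disk replacement flow, using the example names
# from this chapter (pool rpool, disk c8t1d0 on controller c8) as
# placeholders. DRYRUN=1 (the default here) prints each command instead
# of executing it; set DRYRUN=0 on a real system.
DRYRUN=${DRYRUN:-1}
POOL=rpool
DISK=c8t1d0
AP=c8::dsk/c8t1d0                      # cfgadm attachment point

run() {
    if [ "$DRYRUN" -eq 1 ]; then echo "$*"; else "$@"; fi
}

run zpool offline "$POOL" "$DISK"      # some hardware requires offlining first
run cfgadm -c unconfigure "$AP"
# ...physically swap the disk here...
run cfgadm -c configure "$AP"          # not needed on all hardware
run zpool replace "$POOL" "$DISK"
run zpool online "$POOL" "$DISK"
run bootadm install-bootloader         # zpool replace does not apply boot blocks
```

The dry run is useful for reviewing the command sequence before touching a live pool.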

x86: How to Set Up a Disk for a ZFS Root File System

Refer to your hardware installation guide for information on replacing a disk.

  1. Disconnect the damaged disk from the system, if necessary.
  2. Connect the replacement disk to the system, and check the disk's physical connections.
  3. Follow the instructions in the following table, depending on whether you are booting from a local Oracle Solaris DVD or a remote Oracle Solaris DVD from the network.
    Boot Type: From an Oracle Solaris DVD in a local drive
    Action:
    1. Make sure the Oracle Solaris DVD is in the drive.
    2. Select the option to boot from the media.

    Boot Type: From the network
    Action: Select the option to boot from the network.

x86: Preparing a Disk for a ZFS Root File System

Before you begin, review the root pool disk requirements.

How to Recreate the ZFS Root Pool (EFI (GPT))

Use the following procedure if you need to recreate the ZFS root pool or if you want to create an alternate root pool. The zpool create command below automatically creates an EFI (GPT) labeled disk with the correct boot information.

  1. Become an administrator.
  2. Identify the disks for the root pool.

    Use the format utility to identify the disks for the root pool.

    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c6t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
              /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
           1. c6t1d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
              /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@1,0
           2. c6t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
              /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,0
           3. c6t3d0 <FUJITSU-MAV2073RCSUN72G-0301 cyl 14087 alt 2 hd 24 sec 424>
              /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
    Specify disk (enter its number): 
  3. Recreate the root pool.
    # zpool create -B rpool mirror c1t0d0 c2t0d0

    If you want to create an alternate root pool, then use syntax similar to the following:

    # zpool create -B rpool2 mirror c1t0d0 c2t0d0
    # beadm create -p rpool2 solaris2
    # beadm activate -p rpool2 solaris2
  4. Restore the root pool snapshots, if necessary.

    For information about complete ZFS root pool recovery, see Chapter 11, Archiving Snapshots and Root Pool Recovery, in Oracle Solaris 11.1 Administration: ZFS File Systems.
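The alternate root pool sequence in step 3 can be wrapped in a small helper. This is a dry-run sketch that only echoes the commands shown in the example above; the names rpool2, solaris2, and the mirror disks c1t0d0/c2t0d0 are the example placeholders.

```shell
#!/bin/sh
# Dry-run sketch of creating an alternate root pool and a boot environment
# in it, echoing the commands from the example above. All names are the
# example placeholders (rpool2, solaris2, c1t0d0, c2t0d0).
make_alt_root_pool() {
    pool=$1; be=$2; shift 2                     # remaining args: mirror disks
    echo zpool create -B "$pool" mirror "$@"    # -B creates the boot partition
    echo beadm create -p "$pool" "$be"
    echo beadm activate -p "$pool" "$be"
}

make_alt_root_pool rpool2 solaris2 c1t0d0 c2t0d0
```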

x86: How to Create a Disk Slice for a ZFS Root File System (VTOC)

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.

For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.

  1. Become an administrator.
  2. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline rpool c8t1d0s0
    # cfgadm -c unconfigure c8::dsk/c8t1d0
  3. Physically connect the new or replacement disk to the system, if necessary.
    1. Physically remove the failed disk.
    2. Physically insert the replacement disk.
    3. Configure the replacement disk, if necessary. For example:
      # cfgadm -c configure c8::dsk/c8t1d0

      On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the disk is accessible by reviewing the format output.

    For example, the format command shows 4 disks connected to this system.

    # format -e
    AVAILABLE DISK SELECTIONS:
           1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
           2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
           3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
           4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
  5. Select the disk to be used for the ZFS root pool.
    Specify disk (enter its number): 1
    selecting c8t1d0
    [disk formatted]
    .
    .
    .
    format>
  6. Review the status of the fdisk partition.
    • If the disk has no fdisk partition, you will see a message similar to the following:

      format> fdisk
      No Solaris fdisk partition found.

      If so, go to the next step to create an fdisk partition.

    • If the disk has an EFI fdisk or some other partition type, go to the next step to create a Solaris fdisk partition.

    • If the disk has a Solaris fdisk partition, go to step 9 to create a disk slice for the root pool.

  7. If necessary, create a Solaris fdisk partition by selecting the fdisk option.
    format> fdisk
    No fdisk table exists. The default partition for the disk is:
    
      a 100% "SOLARIS System" partition
    
    Type "y" to accept the default partition,  otherwise type "n" to edit the
     partition table. y
  8. If the disk has an EFI fdisk partition, then you will need to create a Solaris fdisk partition.

    If you print the disk's partition table with the format utility and the partition table refers to the first sector and the size, then the disk has an EFI label. You will need to create a Solaris fdisk partition as follows:

    1. Select fdisk from the format options.
      # format -e c8t1d0
      selecting c8t1d0
      [disk formatted]
      format> fdisk
    2. Delete the existing EFI partition by selecting option 3, Delete a partition.
      Enter Selection: 3
      Specify the partition number to delete (or enter 0 to exit): 1
      Are you sure you want to delete partition 1? This will make all files and 
      programs in this partition inaccessible (type "y" or "n"). y
      
      
      Partition 1 has been deleted.
    3. Create a new Solaris partition by selecting option 1, Create a partition.
      Enter Selection: 1
      Select the partition type to create: 1
      Specify the percentage of disk to use for this partition
      (or type "c" to specify the size in cylinders). 100
      Should this become the active partition? If yes, it  will be activated
      each time the computer is reset or turned on.
      Please type "y" or "n". y
      Partition 1 is now the active partition.
    4. Update the disk configuration and exit.
      Enter Selection: 6
      format> 
    5. Display the SMI partition table. If the default partition table is applied, then slice 0 might be 0 in size or it might be too small. See the next step.
      format> partition
      partition> print
  9. Confirm that the disk has an SMI label by displaying the partition (slice) information, and review the slice 0 size information.

    Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.

    partition> modify
    Select partitioning base:
            0. Current partition table (default)
            1. All Free Hog
    Choose base (enter number) [0]? 1
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0                0         (0/0/0)             0
      1       swap    wu       0                0         (0/0/0)             0
      2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6        usr    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wm       0                0         (0/0/0)             0
    
    Do you wish to continue creating a new partition
    table based on above table[yes]? 
    Free Hog partition[6]? 0
    Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]: 
    
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       1 - 17829      136.58GB    (17829/0/0) 286422885
      1       swap    wu       0                0         (0/0/0)             0
      2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6        usr    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wm       0                0         (0/0/0)             0
    Do you wish to continue creating a new partition
    table based on above table[yes]? yes
    Enter table name (remember quotes): "c8t1d0"
    
    Ready to label disk, continue? yes
  10. Let ZFS know that the failed disk is replaced.
    # zpool replace rpool c8t1d0s0
    # zpool online rpool c8t1d0s0

    On some hardware, you do not have to online the replacement disk after it is inserted.

    If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:

    # zpool attach rpool c8t0d0s0 c8t1d0s0

    A zpool attach operation on a root pool disk automatically applies the boot blocks.

  11. If a root pool disk is replaced with a new disk, apply the boot blocks.

    For example:

    # bootadm install-bootloader

    A zpool replace operation does not automatically apply the boot blocks.

  12. Verify that you can boot from the new disk.
  13. If the system boots from the new disk, detach the old disk.

    This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.

    # zpool detach rpool c8t0d0s0
  14. Set up the system to boot automatically from the new disk by reconfiguring the system's BIOS.
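The Blocks column in the partition tables above is plain disk geometry arithmetic: cylinders × heads × sectors per track, with 512-byte blocks. A quick sanity check of the slice 0 figures from the example label (17829 cylinders, 255 heads, 63 sectors per track, which should yield the 286422885 blocks and roughly 136.58 GB shown):

```shell
#!/bin/sh
# Sanity-check the slice 0 figures from the example SMI label:
# 17829 cylinders x 255 heads x 63 sectors/track, 512-byte blocks.
cyls=17829; heads=255; sectors=63
blocks=$((cyls * heads * sectors))     # block count for slice 0
bytes=$((blocks * 512))
gb=$((bytes / 1073741824))             # whole GB (2^30), truncated
echo "blocks=$blocks gb=$gb"
```

Slice 2 (backup) covers all 17830 cylinders, which is why its block count (286438950) is one cylinder's worth (16065 blocks) larger than slice 0.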

x86: How to Replace a ZFS Root Pool Disk (EFI (GPT))

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.

In Oracle Solaris 11.1, in most cases, an EFI (GPT) disk label is installed on the root pool disk.

For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.

  1. Become an administrator.
  2. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline rpool c8t1d0
    # cfgadm -c unconfigure c8::dsk/c8t1d0
  3. Physically connect the new or replacement disk to the system, if necessary.
    1. Physically remove the failed disk.
    2. Physically insert the replacement disk.
    3. Configure the replacement disk, if necessary. For example:
      # cfgadm -c configure c8::dsk/c8t1d0

      On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the disk is accessible by reviewing the format output.

    For example, the format command shows 4 disks connected to this system.

    # format -e
    AVAILABLE DISK SELECTIONS:
           1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
           2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
           3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
           4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
  5. Let ZFS know that the failed disk is replaced.
    # zpool replace rpool c8t1d0
    # zpool online rpool c8t1d0

    On some hardware, you do not have to online the replacement disk after it is inserted.

    If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:

    # zpool attach rpool c8t0d0 c8t1d0

    A zpool attach operation on a root pool disk applies the boot blocks automatically.

    If your root pool disk contains customized partitions, you might need to use syntax similar to the following:

    # zpool attach rpool c8t0d0s0 c8t0d0
  6. If a root pool disk is replaced with a new disk, apply the boot blocks.

    For example:

    # bootadm install-bootloader

    A zpool replace operation on a root pool disk does not apply the boot blocks automatically.

  7. Verify that you can boot from the new disk.
  8. If the system boots from the new disk, detach the old disk.

    This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.

    # zpool detach rpool c8t0d0
  9. Set up the system to boot automatically from the new disk by reconfiguring the system's BIOS.
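Note the device-naming difference between this EFI (GPT) procedure and the VTOC procedure that follows: with an EFI-labeled root pool you pass whole-disk names (c8t1d0) to zpool, while a VTOC-labeled root pool uses slice 0 names (c8t1d0s0). The following helper is illustrative only, not a Solaris utility; it simply derives the slice 0 device name from a whole-disk name.

```shell
#!/bin/sh
# Illustrative helper (not a system tool): map a whole-disk device name
# (EFI usage) to its slice 0 name (VTOC usage), e.g. c8t1d0 -> c8t1d0s0.
slice0() {
    case $1 in
        *s[0-9]) echo "$1" ;;      # already a slice name; leave unchanged
        *)       echo "${1}s0" ;;  # append the slice 0 suffix
    esac
}

slice0 c8t1d0       # -> c8t1d0s0
slice0 c8t0d0s0     # -> c8t0d0s0 (unchanged)
```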

x86: How to Replace a ZFS Root Pool Disk (VTOC)

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.

For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.

  1. Become an administrator.
  2. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline rpool c8t1d0
    # cfgadm -c unconfigure c8::dsk/c8t1d0
  3. Physically connect the new or replacement disk to the system, if necessary.
    1. Physically remove the failed disk.
    2. Physically insert the replacement disk.
    3. Configure the replacement disk, if necessary. For example:
      # cfgadm -c configure c8::dsk/c8t1d0

      On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the disk is accessible by reviewing the format output.

    For example, the format command shows 4 disks connected to this system.

    # format -e
    AVAILABLE DISK SELECTIONS:
           1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
           2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
           3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
           4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
              /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
  5. Select the disk to be used for the ZFS root pool.
    Specify disk (enter its number): 1
    selecting c8t1d0
    [disk formatted]
    .
    .
    .
    format>
  6. Review the status of the fdisk partition.
    • If the disk has no fdisk partition, you will see a message similar to the following:

      format> fdisk
      No Solaris fdisk partition found.

      If so, go to the next step to create an fdisk partition.

    • If the disk has an EFI fdisk or some other partition type, go to the next step to create a Solaris fdisk partition.

    • If the disk has a Solaris fdisk partition, go to step 9 to create a disk slice for the root pool.

  7. If necessary, create a Solaris fdisk partition by selecting the fdisk option.
    format> fdisk
    No fdisk table exists. The default partition for the disk is:
    
      a 100% "SOLARIS System" partition
    
    Type "y" to accept the default partition,  otherwise type "n" to edit the
     partition table. y
  8. If the disk has an EFI fdisk partition, then you will need to create a Solaris fdisk partition.

    If you print the disk's partition table with the format utility and the partition table refers to the first sector and the size, then the disk has an EFI label. You will need to create a Solaris fdisk partition as follows:

    • Select fdisk from the format options.

      # format -e c8t1d0
      selecting c8t1d0
      [disk formatted]
      format> fdisk
    • Delete the existing EFI partition by selecting option 3, Delete a partition.

      Enter Selection: 3
      Specify the partition number to delete (or enter 0 to exit): 1
      Are you sure you want to delete partition 1? This will make all files and 
      programs in this partition inaccessible (type "y" or "n"). y
      
      
      Partition 1 has been deleted.
    • Create a new Solaris partition by selecting option 1, Create a partition.

      Enter Selection: 1
      Select the partition type to create: 1
      Specify the percentage of disk to use for this partition
      (or type "c" to specify the size in cylinders). 100
      Should this become the active partition? If yes, it  will be activated
      each time the computer is reset or turned on.
      Please type "y" or "n". y
      Partition 1 is now the active partition.
    • Update the disk configuration and exit.

      Enter Selection: 6
      format> 
    • Display the SMI partition table. If the default partition table is applied, then slice 0 might be 0 in size or it might be too small. See the next step.

      format> partition
      partition> print
  9. Confirm that the disk has an SMI label by displaying the partition (slice) information, and review the slice 0 size information.

    Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.

    partition> modify
    Select partitioning base:
            0. Current partition table (default)
            1. All Free Hog
    Choose base (enter number) [0]? 1
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0                0         (0/0/0)             0
      1       swap    wu       0                0         (0/0/0)             0
      2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6        usr    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wm       0                0         (0/0/0)             0
    
    Do you wish to continue creating a new partition
    table based on above table[yes]? 
    Free Hog partition[6]? 0
    Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]: 
    
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       1 - 17829      136.58GB    (17829/0/0) 286422885
      1       swap    wu       0                0         (0/0/0)             0
      2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6        usr    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wm       0                0         (0/0/0)             0
    Do you wish to continue creating a new partition
    table based on above table[yes]? yes
    Enter table name (remember quotes): "c8t1d0"
    
    Ready to label disk, continue? yes
  10. Let ZFS know that the failed disk is replaced.
    # zpool replace rpool c8t1d0s0
    # zpool online rpool c8t1d0s0

    On some hardware, you do not have to online the replacement disk after it is inserted.

    If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:

    # zpool attach rpool c8t0d0s0 c8t1d0s0

    When using the zpool attach command on a root pool, the boot blocks are applied automatically.

  11. If a root pool disk is replaced with a new disk, apply the boot blocks.

    For example:

    # bootadm install-bootloader
  12. Verify that you can boot from the new disk.
  13. If the system boots from the new disk, detach the old disk.

    This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.

    # zpool detach rpool c8t1d0s0
  14. Set up the system to boot automatically from the new disk by reconfiguring the system's BIOS.
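The boot-block rule stated in steps 10 and 11 above (zpool attach applies the boot blocks automatically; zpool replace does not) can be captured in a small decision helper. This is illustrative logic only, not a Solaris command:

```shell
#!/bin/sh
# Illustrative decision helper (not a Solaris command): whether
# "bootadm install-bootloader" must be run manually after changing a
# root pool disk. zpool attach applies the boot blocks automatically;
# zpool replace does not.
needs_bootloader() {
    case $1 in
        replace) echo yes ;;      # run bootadm install-bootloader afterward
        attach)  echo no ;;       # boot blocks applied automatically
        *)       echo unknown ;;
    esac
}

needs_bootloader replace
needs_bootloader attach
```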

x86: How to Install Boot Blocks for a ZFS Root File System

If you replace a root pool disk with the zpool replace command, you must install the boot loader. The following procedure works for both VTOC and EFI (GPT) labels.

  1. Become an administrator.
  2. Install the boot blocks on the system disk.
    # bootadm install-bootloader

    If you need to install the boot loader on an alternate root pool, then use the -P (pool) option.

    # bootadm install-bootloader -P rpool2

    If you want to install the GRUB Legacy boot loader, you must first remove all GRUB 2 boot environments from your system and then use the installgrub command. For instructions, see Installing GRUB Legacy on a System That Has GRUB 2 Installed in Booting and Shutting Down Oracle Solaris 11.1 Systems.

  3. Verify that the boot blocks are installed by rebooting the system to run level 3.
    # init 6
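The two bootadm invocations in step 2 differ only in the target pool, so they can be wrapped in a thin helper that defaults to the system root pool and passes -P only when an alternate pool name is supplied. This dry-run sketch echoes the commands rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch: build the bootadm invocation from step 2, adding -P
# only when an alternate root pool name is supplied.
install_bootblocks() {
    if [ $# -eq 0 ]; then
        echo bootadm install-bootloader
    else
        echo bootadm install-bootloader -P "$1"
    fi
}

install_bootblocks            # system root pool
install_bootblocks rpool2     # alternate root pool from the example
```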

x86: How to Set Up a Disk for a ZFS Non-Root File System

If you are setting up a disk to be used with a non-root ZFS file system, the disk is relabeled automatically when the pool is created or when the disk is added to the pool. If a pool is created with whole disks or when a whole disk is added to a ZFS storage pool, an EFI label is applied. For more information about EFI disk labels, see EFI (GPT) Disk Label.

Generally, most modern bus types support hot-plugging. This means you can insert a disk in an empty slot and the system recognizes it. For more information about hot-plugging devices, see Chapter 4, Dynamically Configuring Devices (Tasks).

  1. Become an administrator.

    For more information, see How to Use Your Assigned Administrative Rights in Oracle Solaris 11.1 Administration: Security Services.

  2. Connect the disk to the system and check the disk's physical connections.

    Refer to the disk's hardware installation guide for details.

  3. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline tank c1t1d0
    # cfgadm -c unconfigure c1::dsk/c1t1d0
    <Physically remove failed disk c1t1d0>
    <Physically insert replacement disk c1t1d0>
    # cfgadm -c configure c1::dsk/c1t1d0

    On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the new disk is recognized.

    Review the output of the format utility to see if the disk is listed under AVAILABLE DISK SELECTIONS. Then, quit the format utility.

    # format
  5. Let ZFS know that the failed disk is replaced, if necessary.
    # zpool replace tank c1t1d0
    # zpool online tank c1t1d0

    Confirm that the new disk is resilvering.

    # zpool status tank
  6. Attach a new disk to an existing ZFS storage pool, if necessary.

    For example:

    # zpool attach tank c1t0d0 c2t0d0

    Confirm that the new disk is resilvering.

    # zpool status tank

    For more information, see Chapter 3, Managing Oracle Solaris ZFS Storage Pools, in Oracle Solaris 11.1 Administration: ZFS File Systems.
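The non-root pool steps above reduce to two cases: replace swaps a failed disk in place, while attach mirrors an existing disk with a new one; both are followed by a status check for resilvering. This dry-run sketch echoes the commands, with pool "tank" and the device names as the example placeholders:

```shell
#!/bin/sh
# Dry-run sketch of the non-root pool steps above. "replace" swaps a
# failed disk in place; "attach" mirrors an existing disk with a new one.
# Pool "tank" and the device names are the example placeholders.
fix_pool_disk() {
    mode=$1; pool=$2; shift 2
    case $mode in
        replace) echo zpool replace "$pool" "$1"
                 echo zpool online "$pool" "$1" ;;  # not needed on all hardware
        attach)  echo zpool attach "$pool" "$1" "$2" ;;
    esac
    echo zpool status "$pool"                       # confirm resilvering
}

fix_pool_disk replace tank c1t1d0
fix_pool_disk attach tank c1t0d0 c2t0d0
```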