Oracle Solaris 11.1 Administration: Devices and File Systems

SPARC: Setting Up Disks (Task Map)

The following task map identifies the procedures for setting up a ZFS root pool disk for a ZFS root file system or a non-root ZFS pool disk on a SPARC based system.

Task: 1. Set up the disk for a ZFS root file system.
Description: Connect the new disk or replace the existing root pool disk and boot from a local or remote Oracle Solaris DVD.
For Instructions: SPARC: How to Set Up a Disk for a ZFS Root File System

Task: 2. Install the boot blocks for a ZFS root file system, if necessary.
Description: If you replace a disk that is intended for the root pool by using the zpool replace command, then you must install the boot blocks manually so that the system can boot from the replacement disk.
For Instructions: SPARC: How to Install Boot Blocks for a ZFS Root File System

Task: 3. Set up a disk for a ZFS non-root file system.
Description: Set up a disk for a ZFS non-root file system.
For Instructions: SPARC: How to Set Up a Disk for a ZFS Non-Root File System

SPARC: Setting Up Disks for ZFS File Systems

Although the procedures that describe how to set up a disk can be used with a ZFS file system, a ZFS file system is not directly mapped to a disk or a disk slice. You must create a ZFS storage pool before creating a ZFS file system. For more information, see Oracle Solaris 11.1 Administration: ZFS File Systems.
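
For example, a disk becomes usable by ZFS only after it is placed in a pool. A minimal sketch, assuming a hypothetical disk c1t1d0 and a pool named tank:

# zpool create tank c1t1d0
# zfs create tank/data

The file system tank/data draws space from the whole pool rather than from a fixed disk or slice.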

The root pool contains the root file system that is used to boot the Oracle Solaris OS. If a root pool disk becomes damaged and the root pool is not mirrored, the system might not boot.

If a root pool disk becomes damaged, you have two ways to recover: if the root pool is mirrored, the system can boot from the remaining mirror disk while you replace the damaged disk; if the pool is not mirrored, you can boot from a local or remote Oracle Solaris DVD and then replace the disk.

A disk that is used in a non-root pool usually contains space for user or data files. You can attach or add another disk to a root pool or a non-root pool for more disk space.

Or, you can replace a damaged disk in a pool in the following ways: by using the zpool replace command, or by attaching a replacement disk with zpool attach and then detaching the damaged disk with zpool detach after the replacement has resilvered.

In general, how you set up a disk depends on your hardware, so review your hardware documentation when adding or replacing a disk on your system. If you are adding a disk to an existing controller and the system supports hot-plugging, it might just be a matter of inserting the disk in an empty slot. If you need to configure a new controller, see Dynamic Reconfiguration and Hot-Plugging.
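
For example, the two replacement methods look like the following minimal sketch, assuming a non-root pool named tank and hypothetical device names:

# zpool replace tank c1t1d0 c2t1d0

Or:

# zpool attach tank c1t1d0 c2t1d0
# zpool detach tank c1t1d0

In both cases, ZFS resilvers the data onto the new device.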

SPARC: How to Set Up a Disk for a ZFS Root File System

Refer to your hardware installation guide for information on replacing a disk.

  1. Disconnect the damaged disk from the system, if necessary.
  2. Connect the replacement disk to the system and check the disk's physical connections, if necessary.
  3. Follow the instructions in the following table, depending on whether you are booting from a local Oracle Solaris DVD or a remote Oracle Solaris DVD from the network.
    Boot Type: From an Oracle Solaris DVD in a local drive

    Action:
    1. Make sure the Oracle Solaris DVD is in the drive.
    2. Boot from the media to single-user mode:

    ok boot cdrom -s

    Boot Type: From the network

    Action: Boot from the network to single-user mode:

    ok boot net:dhcp

    After a few minutes, select option 3 - Shell.
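
    If you are unsure which device alias the firmware uses for the DVD drive or the network device, you can list the defined aliases at the OpenBoot prompt before booting; a minimal sketch:

    ok devalias

    Then boot from the appropriate alias, as shown above.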

After You Set Up a Disk for a ZFS Root File System ...

After the disk is connected or replaced, you can create a slice and update the disk label. Go to SPARC: How to Create a Disk Slice for a ZFS Root File System.

SPARC: Creating a Disk Slice for a ZFS Root File System

You must create a disk slice for a disk that is intended for a ZFS root pool on SPARC systems that do not have GPT-aware firmware. This is a long-standing boot limitation.

Review the following root pool disk requirements:

  - The disk must contain an SMI (VTOC) label, not an EFI label.
  - The disk space intended for the root pool must be in a single slice, typically slice 0.
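
After the disk is labeled and sliced (see the procedures that follow), you can verify the result non-interactively with the prtvtoc command; a minimal sketch, assuming a hypothetical device c2t1d0:

# prtvtoc /dev/rdsk/c2t1d0s2

On an SMI-labeled disk, prtvtoc reports the disk geometry in cylinders and a partition map in which slice 0 spans nearly the entire disk and slice 2 (the backup slice) spans all of it.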

SPARC: How to Create a Disk Slice for a ZFS Root File System

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps that follow.

  1. Become an administrator.
  2. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline rpool c2t1d0s0
    # cfgadm -c unconfigure c2::dsk/c2t1d0
  3. Physically connect the new or replacement disk to the system, if necessary.
    1. Physically remove the failed disk.
    2. Physically insert the replacement disk.
    3. Configure the replacement disk, if necessary. For example:
      # cfgadm -c configure c2::dsk/c2t1d0

      On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the disk is accessible by reviewing the format output.

    For example, the format command shows 4 disks connected to this system.

    # format -e
    AVAILABLE DISK SELECTIONS:
           0. c2t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
              /pci@1c,600000/scsi@2/sd@0,0
           1. c2t1d0 <SEAGATE-ST336607LSUN36G-0307-33.92GB>
              /pci@1c,600000/scsi@2/sd@1,0
           2. c2t2d0 <SEAGATE-ST336607LSUN36G-0507-33.92GB>
              /pci@1c,600000/scsi@2/sd@2,0
           3. c2t3d0 <SEAGATE-ST336607LSUN36G-0507-33.92GB>
              /pci@1c,600000/scsi@2/sd@3,0
  5. Select the disk to be used for the ZFS root pool.
  6. Determine whether the disk has an SMI label by displaying the partition (slice) information.

    For example, the partition (slice) output for c2t1d0 shows that this disk has an EFI label because it identifies first and last sectors rather than cylinders.

    Specify disk (enter its number): 1
    selecting c2t1d0
    [disk formatted]
    format> p
    PARTITION MENU:
            0      - change `0' partition
            1      - change `1' partition
            2      - change `2' partition
            3      - change `3' partition
            4      - change `4' partition
            5      - change `5' partition
            6      - change `6' partition
            expand - expand label to use whole disk
            select - select a predefined table
            modify - modify a predefined partition table
            name   - name the current table
            print  - display the current table
            label  - write partition map and label to the disk
            !<cmd> - execute <cmd>, then return
            quit
    partition> p
    Current partition table (original):
    Total disk sectors available: 71116508 + 16384 (reserved sectors)
    
    Part      Tag    Flag     First Sector        Size        Last Sector
      0        usr    wm               256      33.91GB         71116541    
      1 unassigned    wm                 0          0              0    
      2 unassigned    wm                 0          0              0    
      3 unassigned    wm                 0          0              0    
      4 unassigned    wm                 0          0              0    
      5 unassigned    wm                 0          0              0    
      6 unassigned    wm                 0          0              0    
      8   reserved    wm          71116542       8.00MB         71132925    
    
    partition>
  7. If the disk contains an EFI label, relabel the disk with an SMI label.

    For example, the c2t1d0 disk is relabeled with an SMI label, but the default partition table does not provide an optimal slice configuration.

    partition> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[1]: 0
    Auto configuration via format.dat[no]? 
    Auto configuration via generic SCSI-2[no]? 
    partition> p
    Current partition table (default):
    Total disk cylinders available: 24620 + 2 (reserved cylinders)
    
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 -    90      128.37MB    (91/0/0)      262899
      1       swap    wu      91 -   181      128.37MB    (91/0/0)      262899
      2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
      3 unassigned    wm       0                0         (0/0/0)            0
      4 unassigned    wm       0                0         (0/0/0)            0
      5 unassigned    wm       0                0         (0/0/0)            0
      6        usr    wm     182 - 24619       33.67GB    (24438/0/0) 70601382
      7 unassigned    wm       0                0         (0/0/0)            0
    
    partition> 
  8. Create an optimal slice configuration for a ZFS root pool disk.

    Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.

    partition> modify
    Select partitioning base:
            0. Current partition table (default)
            1. All Free Hog
    Choose base (enter number) [0]? 1
    
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0                0         (0/0/0)            0
      1       swap    wu       0                0         (0/0/0)            0
      2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
      3 unassigned    wm       0                0         (0/0/0)            0
      4 unassigned    wm       0                0         (0/0/0)            0
      5 unassigned    wm       0                0         (0/0/0)            0
      6        usr    wm       0                0         (0/0/0)            0
      7 unassigned    wm       0                0         (0/0/0)            0
    
    Do you wish to continue creating a new partition
    table based on above table[yes]? 
    Free Hog partition[6]? 0
    Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]: 
    Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]: 
    
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 24619       33.92GB    (24620/0/0) 71127180
      1       swap    wu       0                0         (0/0/0)            0
      2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
      3 unassigned    wm       0                0         (0/0/0)            0
      4 unassigned    wm       0                0         (0/0/0)            0
      5 unassigned    wm       0                0         (0/0/0)            0
      6        usr    wm       0                0         (0/0/0)            0
      7 unassigned    wm       0                0         (0/0/0)            0
    
    Okay to make this the current partition table[yes]? 
    Enter table name (remember quotes): "c2t1d0"
    
    Ready to label disk, continue? yes
    partition> quit
    format> quit
  9. Let ZFS know that the failed disk has been replaced.
    # zpool replace rpool c2t1d0s0
    # zpool online rpool c2t1d0s0

    On some hardware, you do not have to online the replacement disk after it is inserted.

    If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:

    # zpool attach rpool c2t0d0s0 c2t1d0s0

    A zpool attach operation on a root pool disk applies the boot blocks automatically.

  10. If a root pool disk is replaced with a new disk, apply the boot blocks after the new or replacement disk is resilvered.

    For example:

    # zpool status rpool
    # bootadm install-bootloader

    A zpool replace operation on a root pool disk does not apply the boot blocks automatically.

  11. Verify that you can boot from the new disk.
  12. If the system boots from the new disk, detach the old disk.

    This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.

    # zpool detach rpool c2t0d0s0
  13. Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the SPARC boot PROM, as shown in the sketch that follows.
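
    For example, assuming the new boot disk is the device at /pci@1c,600000/scsi@2/disk@1,0 (an illustrative path; substitute the path or alias of your own disk), run either of the following. From the running system:

    # eeprom boot-device=/pci@1c,600000/scsi@2/disk@1,0

    Or from the ok prompt:

    ok setenv boot-device /pci@1c,600000/scsi@2/disk@1,0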

SPARC: How to Install Boot Blocks for a ZFS Root File System

  1. Become an administrator.
  2. Install a boot block for a ZFS root file system.
    # bootadm install-bootloader

For more information, see bootadm(1M).

  3. Verify that the boot blocks are installed by rebooting the system to run level 3.
    # init 6

Example 10-1 SPARC: Installing Boot Blocks for a ZFS Root File System

If you physically replace the disk that is intended for the root pool and the Oracle Solaris OS is then reinstalled, or you attach a new disk for the root pool, the boot blocks are installed automatically. If you replace a disk that is intended for the root pool by using the zpool replace command, then you must install the boot blocks manually so that the system can boot from the replacement disk.

The following example shows how to install boot blocks for a ZFS root file system.

# bootadm install-bootloader
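
If the boot loader must be installed for a specific pool, for example when running from media, bootadm accepts a -P pool option; this usage is an assumption based on the bootadm(1M) man page:

# bootadm install-bootloader -P rpool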

SPARC: How to Set Up a Disk for a ZFS Non-Root File System

If you are setting up a disk to be used with a non-root ZFS file system, the disk is relabeled automatically when the pool is created or when the disk is added to the pool. If a pool is created with whole disks or when a whole disk is added to a ZFS storage pool, an EFI label is applied. For more information about EFI disk labels, see EFI (GPT) Disk Label.
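
For example, both of the following apply an EFI label to the whole disk automatically, with no format or slicing preparation needed; a minimal sketch with hypothetical pool and device names:

# zpool create tank c1t1d0
# zpool add tank c1t2d0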

Generally, most modern bus types support hot-plugging. This means you can insert a disk in an empty slot and the system recognizes it. For more information about hot-plugging devices, see Chapter 4, Dynamically Configuring Devices (Tasks).

  1. Become an administrator.
  2. Connect the disk to the system and check the disk's physical connections.

    Refer to the disk's hardware installation guide for details.

  3. Offline and unconfigure the failed disk, if necessary.

    Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

    # zpool offline tank c1t1d0
    # cfgadm -c unconfigure c1::dsk/c1t1d0
    <Physically remove failed disk c1t1d0>
    <Physically insert replacement disk c1t1d0>
    # cfgadm -c configure c1::dsk/c1t1d0

    On some hardware, you do not have to reconfigure the replacement disk after it is inserted.

  4. Confirm that the new disk is recognized.

    Review the output of the format utility to see if the disk is listed under AVAILABLE DISK SELECTIONS. Then, quit the format utility.

    # format
  5. Let ZFS know that the failed disk has been replaced, if necessary.
    # zpool replace tank c1t1d0
    # zpool online tank c1t1d0

    Confirm that the new disk is resilvering.

    # zpool status tank
  6. Attach a new disk to an existing ZFS storage pool, if necessary.

    For example, the following command attaches c2t0d0 as a mirror of the existing device c1t0d0 (see also the sketch at the end of this procedure):

    # zpool attach tank c1t0d0 c2t0d0

    Confirm that the new disk is resilvering.

    # zpool status tank

    For more information, see Chapter 3, Managing Oracle Solaris ZFS Storage Pools, in Oracle Solaris 11.1 Administration: ZFS File Systems.
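
Note the difference between attaching and adding a disk, as mentioned in step 6. A minimal sketch with hypothetical device names:

# zpool attach tank c1t0d0 c2t0d0

attaches c2t0d0 as a mirror of c1t0d0, adding redundancy but no capacity, whereas:

# zpool add tank mirror c3t0d0 c4t0d0

adds a new mirrored top-level device and grows the pool's capacity.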