CHAPTER 4
Disk Administration and Management
This chapter includes information about the following topics:
The Sun Fire X4500 server can contain up to 48 SATA hard disk drives. The hard disk drive locations are numbered sequentially from 0 to 47, starting at the front left corner and incrementing left to right and front to rear (see FIGURE 4-1). The nomenclature for the locations is DISKn, where n is the location number.
FIGURE 4-1 Disk Drive Locations
Each hard disk drive has a sensor that is used to communicate the state of its slot. The hard disk drives use Intelligent Platform Management Interface (IPMI) sensors to convey the slot state (see TABLE 4-1):
Inside the Sun Fire X4500 server chassis there are three LEDs for each of the 48 hard disk drives: an Activity LED (green), a Fault LED (amber), and an OK-to-Remove LED (blue).
The fault and OK-to-remove LEDs for an individual drive can both be controlled through an IPMI OEM command. The service processor handles all aspects of the fault and OK-to-remove LEDs automatically, based on events in the disk drive sensors (see TABLE 4-2).
Extensible Firmware Interface (EFI) is an Intel standard used as a replacement for the PC BIOS. It is responsible for the power-on self-test (POST) process, booting the operating system, and providing an interface between the operating system and the physical hardware. EFI provides the following capabilities:
Solaris 10 provides support for EFI Labels for disks that are larger than 1 terabyte on systems that run a 64-bit Solaris kernel. The Extensible Firmware Interface GUID Partition Table (EFI GPT) disk label provides support for physical disks and virtual disk volumes.
You can download the EFI specification at:
http://www.intel.com/technology/efi/main_specification.htm
You can use the format -e command to apply an EFI label to a disk if the system is running the appropriate Solaris release. However, you should review the important information in Restrictions of the EFI Disk Label before attempting to apply an EFI label.
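As a sketch of what applying an EFI label can look like in the expert mode of format(1M) (the menu text shown here is illustrative; the exact prompts can vary by Solaris release):

```shell
# Start format(1M) in expert mode; -e exposes the EFI label choice.
format -e
# format> (select the target disk from the menu)
# format> label
# [0] SMI Label
# [1] EFI Label
# Specify Label type[0]: 1
# Ready to label disk, continue? y
```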
For additional information about EFI disk labels, managing disks with EFI labels, EFI disk label restrictions, and troubleshooting problems with EFI disk labels, refer to the Solaris 10 System Administration Guide at:
http://docs.sun.com
To convert an EFI disk label to an SMI (Solaris) label, delete the EFI fdisk partition, then create a new Solaris fdisk partition. Use the following steps:
Caution - Do not attempt to convert an EFI label to an SMI label using the format(1M) command.
1. Use fdisk to delete the EFI fdisk partition.
# fdisk /dev/rdsk/c0t7d0p0

Total disk size is 30400 cylinders
Cylinder size is 16065 (512 byte) blocks

                               Cylinders
    Partition   Status    Type      Start   End   Length    %
    =========   ======    ====      =====   ===   ======   ===
        1                 EFI           0  30400   30401   100

SELECT ONE OF THE FOLLOWING:
   1. Create a partition
   2. Specify the active partition
   3. Delete a partition
   4. Change between Solaris and Solaris2 Partition IDs
   5. Exit (update disk configuration and exit)
   6. Cancel (exit without updating disk configuration)
Enter Selection: 3
Specify the partition number to delete (or enter 0 to exit): 1
Are you sure you want to delete partition 1? This will make all files
and programs in this partition inaccessible (type "y" or "n"). y

Total disk size is 30400 cylinders
Cylinder size is 16065 (512 byte) blocks

                               Cylinders
    Partition   Status    Type      Start   End   Length    %
    =========   ======    ====      =====   ===   ======   ===

WARNING: no partitions are defined!

(The partition is now deleted. The menu reappears, as shown in Step 2.)

2. Create a new Solaris2 fdisk partition on the disk using the fdisk menu.

3. Verify that the Solaris2 fdisk partition has been created on the same disk.
For additional information about converting EFI and SMI disk labels, refer to the Solaris 10 System Administration Guide at:
http://docs.sun.com
This procedure assumes that you have physically inserted a disk and now want to bring it online.
If you are replacing a mirrored bootable disk, you should use the Solaris Volume Manager to enable the disk. For additional information, refer to the Solaris Volume Manager Administration Guide (819-2789).
Note - Determine which attachment point the disk will be inserted into before inserting the disk. Refer to FIGURE 8-4 for a listing of disk drives.
1. Determine the attachment point by typing the following command:
3. Type the following command:
4. Compare the two files by typing the following command:
Information similar to the following is displayed:
5. Remove the temporary files by typing the following command:
From this information you determine that the inserted drive uses SATA port 3 on controller 3.
6. To bring the disk online for the Solaris OS, configure the disk by typing the following commands:
The following information is displayed. For example, the disk node associated with the disk in sata3/3 displays its logical disk node c5t3d0:
Note - If the blue LED does not turn on after one minute, you can have the OS reenumerate device nodes and links by typing: # devfsadm -C.
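The snapshot comparison in Steps 1 through 5 can be sketched with ordinary shell tools. The file names and the attachment-point listings below are invented for illustration; on a live system the two snapshots would be captured with cfgadm(1M) before and after inserting the disk:

```shell
#!/bin/sh
# Simulated "before" and "after" snapshots of SATA attachment points.
# On a real system each file would be captured with: cfgadm -al > file
cat > /tmp/cfgadm_before <<'EOF'
sata3/2::dsk/c5t2d0  disk  connected  configured    ok
EOF
cat > /tmp/cfgadm_after <<'EOF'
sata3/2::dsk/c5t2d0  disk  connected  configured    ok
sata3/3              disk  connected  unconfigured  unknown
EOF

# Lines present only in the "after" snapshot identify the new disk.
new_disk=$(diff /tmp/cfgadm_before /tmp/cfgadm_after | sed -n 's/^> //p')
echo "$new_disk"

# Remove the temporary files (Step 5).
rm /tmp/cfgadm_before /tmp/cfgadm_after
```

Here the new attachment point is sata3/3 (SATA port 3 on controller 3, as in the example above); the disk would then be configured with a command of the form cfgadm -c configure sata3/3 (Step 6).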
The following example shows how to add another mirror to an existing mirrored ZFS configuration on the system.
For information see “Replacing a Device in a ZFS Storage Pool” in Chapter 11, ZFS Troubleshooting and Data Recovery of the Solaris ZFS Administration Guide.
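A mirror extension of this kind might look like the following sketch. The pool name tank and the device names are hypothetical; substitute the logical disk nodes determined for your system:

```shell
# Attach a new device to an existing device in the pool, creating
# (or widening) a mirror. Pool and device names are hypothetical.
zpool attach tank c4t0d0 c5t3d0

# Watch resilvering progress until the new side of the mirror is current.
zpool status tank
```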
Caution - You must follow these steps before removing a disk from service. Failure to follow the procedure can corrupt your data or render your file system inoperable.
1. Assume you know that the logical disk node is c4t0d0. Type the following command:
The physical slot is displayed, showing where the disk is connected. For example, this hard disk is attached to SATA controller 2, port 0:
2. Unconfigure the disk before removal. To unconfigure the disk, you must suspend activity on the SATA device. For example, type the following command:
The system displays the following information:
unconfigure sata2/0
Unconfigure the device at: /devices/pci@1,0/pci1022,7458@3/pci11ab,11ab@1:0
Continue (yes/no)? yes
3. Verify that the disk has been unconfigured by typing the following command:
The following information shows that the disk has been unconfigured:
Note - The blue LEDs indicate the disks that are safe to remove.
4. Remove the disk from the chassis.
Note - If the process of unconfiguring the disk failed, the disk might be in use by ZFS, UFS, or some other entity. See "Correcting Unconfigure Operation Failure".
This section describes how to correct a failed disk unconfigure operation.
If a disk unconfigure operation fails, verify that the system is in the correct state and that no utility is using the disk. When unconfiguring a disk that is part of a ZFS storage pool, the following items are important:
For more information about detaching or replacing disks in a storage pool, refer to the ZFS Administration Guide, 819-5461.
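As a sketch, releasing a disk from a ZFS pool before unconfiguring it might look like the following. The pool name tank and the device name are hypothetical:

```shell
# If the disk is part of a ZFS pool, release it before unconfiguring.
# Pool and device names are hypothetical.
zpool offline tank c4t0d0    # take the device offline in its pool

# Or, if the device is one side of a mirrored vdev:
zpool detach tank c4t0d0     # remove the device from the mirror
```

Once ZFS no longer holds the device, the cfgadm unconfigure operation described above should succeed.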
Copyright © 2009 Sun Microsystems, Inc. All rights reserved.