CHAPTER 3
This chapter describes software issues related to the Sun Blade 6000 disk module. The following issues are described:
VMWare ESX does not recognize SCSI IDs (6790706)
VMware ESX 3.5 fails to recognize disks with SCSI IDs greater than 61 on the Sun Blade 6000 disk module. You may encounter this in installations with very high disk counts, which can occur when more than four disk blades are installed in a single Sun Blade 6000 chassis.
When larger configurations exist in the chassis and multipathing is enabled, some disks may not be available for use by ESX.
You may occasionally see the error, but there will be no interruption in service. This problem will be fixed in a future release.
Limit the number of disk blades to four per Sun Blade 6000 chassis.
The Oracle Solaris OS cannot be installed on a RAID array larger than one terabyte.
The Solaris OS installer does not support installation on a RAID array larger than one terabyte.
There is no workaround for this issue.
If you choose to delete all partitions during SUSE Linux installation and you are using a disk blade with two SAS-NEMs and no hardware RAID, you will see a popup window with the message "system error code was: -1014". Then, when you click OK, the installation aborts.
Note - This problem only occurs when you are using an LSI SAS host bus adapter. If you are using an Adaptec host bus adapter, you must create volumes with the BIOS RAID configuration utility, so that the OS is unaware of the second path.
Under the conditions described, each physical disk is shown to the OS as two logical disks. The SUSE installer is not multi-path aware, so it cannot combine two logical disks into one entity. When you choose to delete all partitions, the installer tries to delete partitions on both logical disks. This operation fails and you get the error message.
There are two possible workarounds:
1. Choose only one instance of the disk for partition deletion: Reboot the system and restart the installation process. Do not delete any partitions except the boot and root file systems. Once the OS is installed and booted, you can modify the partition tables.
2. Use the LSI BIOS configuration utility to create a hardware RAID volume. Then the OS is unaware of the second path.
On SPARC systems, raidctl -l and raidctl -S operations can take more than one minute per disk.
None at present. Check for the availability of a patch for this problem.
The Solaris format command shows disks as "drive not available" after RAID volumes are created or deleted using raidctl.
There are two workarounds:
1. Reboot the system.
2. When the format command reports that a drive is not available, use the cfgadm -c command to unconfigure the corresponding disk attachment point, regardless of whether a volume was created or deleted.
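For example, the attachment point for the affected disk can be listed and then unconfigured and reconfigured as follows. The controller and target IDs shown are placeholders; use cfgadm -al to find the actual attachment points on your system:

```
# cfgadm -al
# cfgadm -c unconfigure c0::dsk/c0t20d0
# cfgadm -c configure c0::dsk/c0t20d0
```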
Solaris 10 5/08 cannot be installed on server blades whose paired disk blade contains more than a certain number of hard drives. Users who maintain network install servers must add a patch to the miniroot of those servers.
This is the procedure for adding patch 138076-02 to the x86 miniroot. The procedure must be done on an x86 system running the latest Solaris 10 update, with the latest available packaging and patching utilities installed:
1. cd to your Solaris_10/Tools directory.
2. Run setup_install_server to a local directory:
# ./setup_install_server -b /export/home/s10u5_patch
3. Unpack the miniroot:
# /boot/solaris/bin/root_archive unpackmedia /export/home/s10u5_patch /export/home/s10u5_patch_mr
4. Install the patch.
# patchadd -C /export/home/s10u5_patch_mr <patch directory>
5. Pack up the new miniroot:
# /boot/solaris/bin/root_archive packmedia /export/home/s10u5_patch /export/home/s10u5_patch_mr
Now on your install server, use setup_install_server and then copy the newly generated x86.miniroot over:
1. cd to your Solaris_10/Tools directory.
2. Run setup_install_server to a local directory:
# ./setup_install_server /export/home/s10u5_patch
3. Save the old x86.miniroot file:
# cd /export/home/s10u5_patch/boot
# cp -p x86.miniroot x86.miniroot.orig
4. Copy the new x86.miniroot file from the machine on which you built it, for example:
# cp -p /net/<machine_name>/export/home/s10u5_patch/boot/x86.miniroot .
Solaris 10 5/08 cannot be installed on X6220 blades in a Sun Blade 6000 chassis that contains more than a certain number of hard drives in its disk blades.
An installation from DVD will fail if the chassis is populated with server/disk pairs.
Remove all blades from the chassis except for a single X6220 server blade or a pair of an X6220 server blade and a disk blade. Then install Solaris 10 5/08.
After the installation, boot the system and apply patch 138076 before repopulating the chassis with other blades.
If MPxIO is enabled for the mpt SAS driver, raidctl cannot be used to create and manage RAID volumes.
Create RAID volumes with the raidctl utility before enabling MPxIO. If you need to change or create RAID volumes after MPxIO is enabled, disable MPxIO first, make the changes or create the RAID volumes, and re-enable MPxIO.
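On Solaris, MPxIO for the mpt driver can be toggled with the stmsboot utility (each invocation prompts for a reboot). The following is a sketch of the sequence described above; the disk names are placeholders for disks reported by raidctl:

```
# stmsboot -D mpt -d        (disable MPxIO for the mpt driver; reboot when prompted)
# raidctl -c c0t0d0 c0t1d0  (create or modify RAID volumes as needed)
# stmsboot -D mpt -e        (re-enable MPxIO; reboot when prompted)
```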
When using raidctl -l with a volume name, the output will truncate the volume name to seven characters if the volume target ID is larger than 100. For example,
# raidctl -l c0t102d0
Volume    Size    Stripe  Status   Cache  RAID
----------------------------------------------------------------
c0t102d   136.6G  64K     OPTIMAL  OFF    RAID0
Ignore the volume name that gets listed in the output when raidctl -l <volume name> is used. You can still use the rest of the information displayed.
When a system motherboard or a daughter card (such as a REM) with an LSI host bus adapter is replaced in the field, raidctl does not allow reactivation of the RAID volumes. The RAID volume information is in metadata on the disks, but the state of the volume is changed to inactive after the replacement.
The raidctl utility does not allow activation of RAID volumes, so the volumes cannot be reactivated from within the Solaris OS.
The workaround for SPARC systems is documented in Appendix A of the Sun Blade 6000 Disk Module Service Manual (part number 820-1703).
For x64 systems running the Solaris OS (or Linux or Windows), you can reactivate the array using the LSI or Adaptec BIOS RAID configuration utilities.
For systems running Linux or Windows, you can also use the LSI MSM software or the Sun StorageTek RAID Manager software (Adaptec controllers).
The Solaris raidctl utility cannot set a disk as a hot-spare; the raidctl -a and -g options do not work.
None for SPARC systems.
For all OS on x64 servers, you can set hot-spares using the LSI or Adaptec BIOS RAID configuration utilities.
For systems running Linux or Windows, you can also use the MSM software or the Sun StorageTek RAID Manager software (Adaptec controllers).
The raidctl -d operation does not check for mounted RAID volumes and will delete such a volume even if it is mounted.
There is no workaround. Before deleting a volume with the raidctl -d option, use the mount command to check whether any partitions on the volume are mounted. For example, to list the RAID volumes on the system:
# raidctl -l | egrep -i volume
Controller: 0 Volume:c0t20d0
To see if any partitions on volume c0t20d0 are mounted, execute this command:
# mount | egrep c0t20d0
/ on /dev/dsk/c0t20d0s0 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=800008 on Fri Oct 3 16:16:17 2008
/export/home on /dev/dsk/c0t20d0s7 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=80000f on Fri Oct 3 16:16:28 2008
This output indicates that the volume does have mounted partitions, one of which is the root (boot) partition, so deleting the volume would destroy that data and render the system unbootable. Deleting this volume is inadvisable.
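Because raidctl -d performs no such check itself, the check can be scripted. This is a minimal sketch (the volume name is a placeholder) that refuses to proceed when any partition of the volume appears in the mount table:

```shell
#!/bin/sh
# Placeholder volume name; substitute the volume reported by raidctl -l.
vol=c0t20d0

# Search the mount table for any partition of the volume before deleting it.
if mount | grep "$vol" >/dev/null 2>&1; then
    echo "refusing to delete $vol: mounted partitions found"
else
    echo "no mounted partitions on $vol; safe to run: raidctl -d $vol"
fi
```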
After making storage hardware configuration changes, you may see warning messages during Solaris system boot up similar to the following:
WARNING: /pci@0/pci@0/pci@2/scsi@0 (mpt0): mpt_get_sas_device_page0 config: IOCStatus=0x8022, IOCLogInfo=0x30030501
(The message may be repeated several times.)
These messages are harmless and may be safely ignored.
Ignore the messages. To eliminate them from future reboots, run devfsadm -C to remove any outdated device links.
Sun Blade T6300 and T6320 Server Modules can occasionally hang at boot when the Sun Blade 6000 10GbE Multi-Fabric NEM is used.
Reboot or reset from OpenBoot until a fix is available. Contact the Sun Service Center if three successive reboot cycles do not resolve the issue.
By default, when Solaris is installed, multipathed IO (MPxIO) to Vela disks is disabled. When this feature is enabled by the user, the load-balance variable in the file /kernel/drv/scsi_vhci.conf defaults to round-robin. It should be reset to none.
Setting load-balance to none causes only one path to be used for active IO, with the other path used for failover.
A serious performance degradation will result if the load-balance variable is left set to round-robin, because IO would then be attempted on the passive path.
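A sketch of the relevant entry in /kernel/drv/scsi_vhci.conf after the change (the rest of the file is unchanged, and a reboot is required for the setting to take effect):

```
# Use one active path; keep the second path for failover only.
load-balance="none";
```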
When enabling or disabling MPxIO in conjunction with a ZFS root, the system will not reboot cleanly because the mpxio-upgrade service does not know how to handle a ZFS root.
1. Disable the mpxio-upgrade service.
2. Run /lib/mpxio/stmsboot_util -u
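A sketch of the sequence, assuming the default mpxio-upgrade service name (the exact service FMRI can vary by Solaris release; confirm it with svcs):

```
# svcs -a | grep mpxio-upgrade
# svcadm disable mpxio-upgrade
# /lib/mpxio/stmsboot_util -u
```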
When a Windows Server 2003 (32-bit or 64-bit) OS is installed on a SAS disk in a disk module and there are two Multi-Fabric NEMs in the chassis, each physical disk on a Sun Blade Disk Module will show up as two different disks in Windows. However, only one of these disks can have a physical partition allocation. If you attempt to create another partition on the second disk, Windows Logical Disk Manager will not be able to complete the request.
1. Create a partition on only one of the two disks.
2. Create a hardware RAID volume using the SAS host bus adapter's RAID configuration utility (entered through the server's BIOS on initial boot-up). Then the OS will see only one disk.
When a Windows Server 2003 (32-bit and 64-bit) OS is installed on a SAS disk in a disk module and there are two Multi-Fabric NEMs in the chassis, there are two paths to the disk where the OS resides. Removing one NEM breaks one path and the OS automatically reboots.
When a Windows Server 2003 (32-bit or 64-bit) OS is installed and there are two Multi-Fabric NEMs in the chassis, each physical disk on Oracle's Sun Blade Disk Module will show up as two different disks in Windows. However, only one of these disks can have a physical partition allocation. If the user attempts to create another partition on the second disk, Windows Logical Disk Manager will not be able to complete the request.