Sun Server X3-2 (formerly Sun Fire X4170 M3)

Product Notes, Version 1.2.1


Document Information

Using This Documentation

Getting the Latest Software and Firmware

About This Documentation

Related Documentation

Feedback

Support and Accessibility

Sun Server X3-2 Product Notes

Sun Server X3-2 Name Change

Supported Hardware

Supported Firmware Versions

Supported Operating Systems

Important Operating Notes

Operational Changes for UEFI BIOS Configuration

Single-Processor to Dual-Processor Upgrade Is Not Supported

Update Your System to the Latest Software Release

Avoid Overwriting the Embedded Oracle System Assistant USB Flash Drive

Oracle Solaris 10 8/11 Required Patches

Preinstalled Oracle VM Server and Oracle VM Manager Compatibility Requirements

Supported Operating System Limitations

Update HBA Firmware to Support UEFI BIOS

Segfaults Might Occur on Servers Running 64-bit Linux Operating Systems

Failure of a Single Server Fan Module Might Impact Performance

Standby Over-temperature Protection

MAC Address Mapping to Ethernet Ports

Inspect Grounding Strap on 3.5-inch HDD Bracket Before Installing

Battery Module

Server Management Tools

Supported PCIe Cards

Resolved Issues

Resolved Issue for This Software Release

Resolved Issues for Previous Software Releases

Known Issues

Hardware Known Issues

Oracle Solaris Operating System Known Issues

Linux Operating Systems and Virtual Machine Known Issues

Oracle System Assistant Known Issues

Documentation Known Issues

Software Release 1.2.1 Documentation Incorrectly Implies Ability to Set RAID Volume as Global Hot Spare

Caution Statements Added to the Sun Server X3-2 Service Manual

Correction to the Version of the Oracle VM Server Software Preinstalled on the Server

Note Changed to Caution Statement in the Sun Server X3-2 Installation Guide for Oracle VM

Corrections to the Sun Server X3-2 Installation Guide for Linux Operating Systems

Incorrect Layout of the Sun Server X3-2 Front Panel Controls and Indicators Documented in the Sun Server X3-2 Installation Guide and the Sun Server X3-2 Service Manual

Sun Server X3-2 Installation Guide Updated to Include Configuration Instructions for the Oracle Linux Operating System Preinstalled on the Server

Sun Server X3-2 Service Manual Updated to Include URLs to the Sun Server X3-2 Service Animations

Operating Altitude Limits for China Markets

Translated Documents Use Abbreviated Titles

Incorrect Server Name in the Printed Getting Started Guide

Getting Server Firmware and Software

Firmware and Software Updates

Firmware and Software Access Options

Available Software Release Packages

Accessing Firmware and Software

Download Firmware and Software Using My Oracle Support

Requesting Physical Media

Gathering Information for the Physical Media Request

Installing Updates

Installing Firmware

Installing Hardware Drivers and OS Tools

Hardware Known Issues

Table 8 Hardware Known Open Issues

BugDB
Description
15584702 (formerly CR 6875309)
MegaRAID mouse pointer does not work on Oracle ILOM Remote Console.
Issue:

When using the Oracle ILOM Remote Console (with the mouse mode set to Absolute) on a server with a Sun Storage 6 Gb SAS PCIe RAID HBA Internal option card installed, if you boot the system and press Ctrl+H to enter the LSI MegaRAID BIOS Utility, the mouse pointer only moves vertically and horizontally on the left and top sides of the utility.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe RAID HBA, Internal option card (SGX-SAS6-R-INT-Z and SG-SAS6-R-INT-Z)

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

In the Oracle ILOM Remote Console, change the mouse mode setting from Absolute (the default) to Relative mode. For instructions on setting the Oracle ILOM Remote Console to Relative mode, see the Oracle ILOM 3.1 Documentation Library at: http://www.oracle.com/pls/topic/lookup?ctx=ilom31

15736328 (formerly CR 7080526)
UEFI configuration settings might be lost when transitioning between UEFI BIOS and Legacy BIOS.
Issue:

Unified Extensible Firmware Interface (UEFI) boot priority list settings might be lost when transitioning between UEFI BIOS and Legacy BIOS. This issue might occur if you need to run system diagnostics using the Pc-Check utility, which only runs in the Legacy BIOS. UEFI configuration settings should be saved prior to switching between UEFI BIOS and Legacy BIOS.

Affected software:
  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

Use the Oracle ILOM BIOS Configuration Backup and Restore feature to save configuration settings prior to transitioning between the BIOS modes. Then restore the BIOS configuration settings after transitioning back to UEFI mode. For more information and procedures for saving UEFI configuration settings, refer to the Oracle ILOM 3.1 Configuration and Maintenance Guide in the Oracle Integrated Lights Out Manager (ILOM) 3.1 Documentation Library at: http://www.oracle.com/pls/topic/lookup?ctx=ilom31
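As a sketch only, the backup and restore might look like the following from the Oracle ILOM CLI. The TFTP host and file name are hypothetical, and the command forms mirror the load example used elsewhere in these notes; verify the exact syntax against the Oracle ILOM 3.1 documentation.

```
-> dump -destination tftp://tftp-host.example.com/uefi_backup.xml /System/BIOS/Config
(backup completes; transition to Legacy BIOS, run Pc-Check, then transition back to UEFI)
-> load -source tftp://tftp-host.example.com/uefi_backup.xml /System/BIOS/Config
-> reset /System
```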

15763252 (formerly CR 7125220)
System hangs when the server is booted in UEFI BIOS mode if the Sun Storage 6 Gb SAS PCIe RAID HBA is running an old version of the LSI firmware, version 10M09P9 or older.
Issue:

The Driver Configuration Protocol must be called on each device handle (there is no mechanism to associate it with the host bus adapter (HBA) device after it is installed). The Unified Extensible Firmware Interface (UEFI) specification states that the protocol must return EFI_UNSUPPORTED if called for the wrong device. Instead, if older LSI firmware is running on the HBA, the HBA driver attempts to use the device without checking it, which causes a processor exception. The newer version of the HBA LSI firmware fixes the protocol to check the device and return the appropriate status code.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe RAID HBA, Internal option card (SGX-SAS6-R-INT-Z and SG-SAS6-R-INT-Z)

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

If this problem is encountered, you can recover in either of the following ways:

  • Update the HBA LSI firmware. For instructions for updating the HBA firmware, see Update HBA Firmware to Support UEFI BIOS.

  • Use Oracle ILOM to restore BIOS to the default settings. This reverts the BIOS back to Legacy BIOS, which is the factory default.

  • Use Oracle ILOM BIOS Configuration Backup and Restore to change the UEFI Boot Mode Option back to Legacy BIOS, instead of UEFI BIOS.

15735895 (formerly CR 7079855)
BIOS might not respond to a USB keyboard and/or mouse that is connected directly to the server.
Issue:

On rare occasions, when a USB keyboard and/or mouse is directly connected to the server, the keyboard and/or mouse might not be recognized by BIOS. This problem is indicated by a failure of BIOS to respond to key presses during the time the BIOS splash screen is displayed.

Affected software:
  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

Reboot the host. If the problem persists after two or three reboots, contact your authorized Oracle service provider for assistance.

15761342 (formerly CR 7121782)
BIOS might hang when a key is entered in response to a prompt from BIOS.
Issue:

On rare occasions, BIOS might hang when a key is entered in response to a prompt from BIOS for F2, F8, or F12 input. The prompt, and the resulting hang after key input, might look similar to the following:

Version 2.14.1219. Copyright (C) 2011 American Megatrends, Inc.

BIOS Date: 12/09/2011 10:23:55 Ver: 18010900

Press F2 to run Setup (CTRL+E on serial keyboard)

Press F8 for BBS Popup (CTRL+P on serial keyboard)

Press F12 for network boot (CTRL+N on serial keyboard)

Entering Setup...B2

Affected software:
  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

Reboot the host. If the problem persists after two or three system reboots, contact your authorized Oracle service provider for assistance.

15788976 (formerly CR 7165568)
GRUB boot loader can only boot from the first eight hard drives in a system.
Issue:

Some versions of the GRUB boot loader can only boot from the first eight hard drives in a system. It is possible to install the operating system (OS) and boot loader to a drive that is ninth or higher in the list of drives connected to host bus adapters (HBAs) with Option ROMs enabled. However, when the system is rebooted after the OS installation, the GRUB boot loader will hang at the GRUB prompt, and will not execute disk I/O operations to load the OS from the disk drive.

Affected software:
  • Oracle Linux 6.1, using Unified Extensible Firmware Interface (UEFI) BIOS or Legacy (non-UEFI) BIOS

  • Red Hat Enterprise Linux (RHEL) 6.1, using UEFI BIOS or Legacy BIOS

  • SUSE Linux Enterprise Server (SLES) 11 SP1/SP2, using Legacy BIOS

  • Oracle Linux 5.7, 5.8 using Legacy BIOS

  • RHEL 5.7, 5.8 using Legacy BIOS

  • Oracle VM 3.0 and 3.1 using Legacy BIOS

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

Depending on your operating system and your BIOS configuration, choose one of the following solutions.

Solution 1 (Supporting all operating systems and either Legacy BIOS or UEFI BIOS configurations):

  1. Rearrange the disk drives and reinstall the operating system and boot loader to any one of the first eight disk drives in the system. This method might require you to enter the BIOS Setup Utility and disable the Option ROMs of HBAs that are connected to disk drives that are not used for system boot.

    For information on entering the BIOS Setup Utility and changing Option ROM settings of HBAs, see “Configure Option ROM Settings” in the Sun Server X3-2 Administration Guide.


Solution 2 (Supporting Oracle Linux 6.1 and RHEL 6.1 in a Legacy BIOS configuration):

This procedure details the process of updating the GRUB RPM of the OS, and reinstalling GRUB to the MBR of the disk drive from a rescue environment. For more information on updating the GRUB MBR boot code from a rescue environment, see http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-rescuemode.html#Rescue_Mode-x86

Before you begin, you will need to obtain the Oracle Linux 6.2 or RHEL 6.2 installation media as appropriate.

  1. Boot the system from the Oracle Linux 6.2 or RHEL 6.2 installation boot media.

  2. From the installation prompt, type linux rescue to enter the rescue environment.

  3. Create a directory for the installation media.

    mkdir /mnt/cd

  4. Mount the installation media.

    mount -o ro /dev/sr0 /mnt/cd

    cp /mnt/cd/Packages/grub-0.97-75*rpm /mnt/sysimage

  5. Enter change root environment on the root partition.

    chroot /mnt/sysimage

    yum localupdate /grub-0.97-75*rpm || rpm -Uvh /grub-0.97-75*rpm

  6. Reinstall the GRUB boot loader.

    /sbin/grub-install bootpart

    where bootpart is the boot device (typically, /dev/sda).

  7. Review the /boot/grub/grub.conf file, as additional entries might be needed for GRUB to control additional operating systems.

  8. Reboot the system.

    -> reset /System


Solution 3 (Supporting Oracle Linux 6.1 and RHEL 6.1 in a UEFI BIOS configuration):

This procedure details the process of updating the grub.efi binary by updating the GRUB RPM to the latest version from a rescue environment. For more information on updating the GRUB RPM from a rescue environment, see http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-rescuemode.html#Rescue_Mode-x86.

Before you begin, you will need to obtain the Oracle Linux 6.2 or RHEL 6.2 installation media as appropriate.

  1. Boot the system from the Oracle Linux 6.2 or RHEL 6.2 installation boot media as appropriate.

  2. From the UEFI boot loader menu, type linux rescue to enter the rescue environment.

  3. Create a directory for the installation media.

    mkdir /mnt/cd

  4. Mount the installation media.

    mount -o ro /dev/sr0 /mnt/cd

    cp /mnt/cd/Packages/grub-0.97-75*rpm /mnt/sysimage

  5. Enter change root environment on the root partition.

    chroot /mnt/sysimage

    yum localupdate /grub-0.97-75*rpm || rpm -Uvh /grub-0.97-75*rpm

  6. Exit the change root environment.

    exit

  7. Exit rescue mode.

  8. Reboot the system.

    -> reset /System

15784988 (formerly CR 7160733)
Using any operating system tools or utilities to manage (create, modify, or delete) UEFI Boot variables might result in the loss of a boot variable needed to start the operating system.
Issue:

During operating system installations in Unified Extensible Firmware Interface (UEFI) mode, operating system installers will create UEFI Boot variables to be used in BIOS menus to select the operating system to boot. To avoid potential loss of a boot variable created by the operating system installer, you should not use any operating system tools or utilities to manage (create, modify, or delete) these boot variables. Loss of a boot variable will preclude users from being able to boot the operating system.

Affected software:
  • All supported UEFI capable operating systems

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

If a UEFI Boot variable is lost, reinstall the operating system so as to create a new UEFI Boot variable.
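For reference, on Linux the installer-created boot variables can be inspected without modifying them. The sketch below assumes the efibootmgr utility is installed and the system was booted in UEFI mode.

```
# List the current UEFI boot variables (read-only).
# Avoid efibootmgr's modifying options (-c/--create, -B/--delete-bootnum,
# -o/--bootorder) on installer-created entries, for the reasons given above.
efibootmgr -v
```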

15789031 (formerly CR 7165622)
On servers configured with the Sun Storage 6 Gb SAS PCIe Internal HBA and UEFI mode selected in BIOS, attempts to install the Windows Server 2008 operating system to a newly created R1 or R10 RAID volume fail.
Issue:

Note - This problem does not occur in Legacy BIOS. If you are installing Windows Server 2008 using Legacy BIOS, you will not experience this problem.


In Unified Extensible Firmware Interface (UEFI) BIOS, during setup of Windows Server 2008 R2 SP1 or Windows Server 2008 SP2, the installer is unable to detect a newly created R1 or R10 RAID volume. This might occur in cases where the Sun Storage 6 Gb SAS PCIe Internal host bus adapter (HBA) has had other disks or prior RAID configurations configured on it.

This problem is due to an issue with the way data is managed within a mapping table used in the HBA NVRAM when in UEFI mode. If multiple RAID configurations are created, then removed (such as might occur in a test environment), the entries used in the mapping table might be filled and new configurations cannot be added. This happens because the stale data in the mapping table from prior configurations is not purged.

Affected hardware and software:
  • Windows Server 2008 R2 SP1 and Windows Server 2008 SP2

  • Sun Storage 6 Gb SAS PCIe Internal HBA (SG-SAS6-INT-Z and SGX-SAS6-INT-Z)

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

To clear entries in the HBA NVRAM mapping table, perform the following steps:

  1. Reset or power cycle the host, and when BIOS starts, press F2 to access the BIOS Setup Utility.

  2. In the BIOS Setup Utility screen, select the Boot Menu, temporarily change the UEFI/BIOS Boot Mode to Legacy BIOS, and press F10 to save the change and exit BIOS.

  3. When BIOS restarts, press F8, and then watch the monitor for the LSI BIOS to start.

    At the F8 boot menu, you should see the logical volume.

  4. At the F8 boot menu, scroll down and enter the BIOS Setup Utility again.

  5. In the BIOS Setup Utility screen, select the Boot Menu, change the UEFI/BIOS Boot Mode back to UEFI, and press F10 to save the change and exit BIOS.

  6. Restart Windows Server 2008 setup.

    On the next boot attempt, the Windows Server 2008 installer will see the logical volume.

15790853 (formerly CR 7167796)

Oracle ILOM BIOS Configuration Backup and Restore should not report “Partial Restore” status.
Issue:

Any time an Oracle ILOM Unified Extensible Firmware Interface (UEFI) BIOS configuration is loaded, the configuration file might contain inactive parameters (parameters that are no longer valid for the current version of the BIOS) or typographical errors. Either can cause one or more parameters to fail to load. When this occurs, the Oracle ILOM /System/BIOS/Config/restore_status parameter, which reports the status of the last attempted configuration load, shows that the load was partially successful. The value of the /System/BIOS/Config/restore_status parameter will not change until a subsequent load of an Oracle ILOM UEFI BIOS configuration occurs.

Affected software:
  • Oracle ILOM 3.1

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:
  1. Using a text editor, create an XML file with the following contents:

    <BIOS>

    </BIOS>

  2. Save the file to any XML file name.

    For purposes of this example, the file name used is bios_no_op_config.xml

  3. To load the configuration, enter the following command:

    % load -source <URI_location>/bios_no_op_config.xml /System/BIOS/Config

  4. If host power is on, enter the following Oracle Integrated Lights Out Manager (ILOM) command to reset the host:

    -> reset /System
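The no-op configuration file from step 1 can also be created from a shell with a here-document; this is a minimal sketch, and the file name matches the example above.

```shell
# Create the minimal no-op UEFI BIOS configuration file described in step 1.
cat > bios_no_op_config.xml <<'EOF'
<BIOS>
</BIOS>
EOF
# Confirm the file contents before loading it with the ILOM load command.
cat bios_no_op_config.xml
```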

15785186 (formerly CR 7160984)

Emulex HBA: UEFI “Add Boot Device” hangs when invoked if “Scan Fibre Devices” is not run first.
Issue:

Note - This problem only occurs on Emulex host bus adapters (HBAs) running EFIBoot version 4.12a15 firmware. If you are running a different version of the HBA firmware, you will not experience this issue.


At the UEFI Driver control HII menu for the Emulex HBA, with Set Boot From San set to enabled, if you run the Add Boot Device function, you will see the Please Wait message for approximately 3 to 5 seconds, and then the system hangs. You must reset the server to clear the server hang condition.

However, if you run the Scan Fibre Devices function first, and then you run the Add Boot Device function, the Add Boot Device function works correctly. The hang condition only occurs if the Add Boot Device function is run first.

Affected hardware and software:
  • Sun StorageTek 8 Gb FC PCIe HBA Dual Port Emulex, with EFIBoot version 4.12a15 firmware (SG-PCIE2FC-EM8-Z and SG-XPCIE2FC-EM8-N)

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

To recover from this hang condition, enter the following command to power cycle the host:

-> reset /System

15787798 (formerly CR 7164218)

MegaRAID Storage Manager V11.08.03.02 is not able to assign hot spares if the Sun Storage 6 Gb SAS PCIe HBA, Internal based virtual drive is constructed on 3-TB drives using EFI partitioning.
Issue:

MegaRAID Storage Manager V11.08.03.02 is not able to assign hot spares if the virtual drive is based on the Sun Storage 6 Gb SAS PCIe HBA, Internal option card, and is constructed on 3-terabyte (3-TB) drives using Extensible Firmware Interface (EFI) partitioning.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe RAID HBA, Internal option card (SGX-SAS6-INT-Z and SG-SAS6-INT-Z)

  • MegaRAID Storage Manager V11.08.03.02

  • Releases 1.0, 1.1, 1.1.1, and 1.2

Workaround:

Use the sas2ircu utility until the defect in MegaRAID Storage Manager is corrected.
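The sas2ircu invocation below is illustrative only; the controller index and Enclosure:Bay values depend on your configuration, so enumerate them first with the LIST and DISPLAY commands.

```
sas2ircu LIST                # enumerate controllers and their indexes
sas2ircu 0 DISPLAY           # show volumes and physical drives on controller 0
sas2ircu 0 HOTSPARE 2:5      # example: assign the drive in enclosure 2, slot 5 as a hot spare
```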

15803551,15803553 (formerly CRs 7183782, 7183789)

On single-processor systems, some Oracle ILOM web interface System Information screens show an incorrect number of Ethernet ports and PCIe ports available for use.
Issue:

In single-processor systems, Ethernet ports NET 2 and NET 3, and PCIe slot 1 are nonfunctional. However, the following Oracle Integrated Lights Out Manager (ILOM) web interface screens incorrectly show these ports as available for use:

  • The Oracle ILOM System Information > Summary screen and the System Information > Networking screen show the number of supported Ethernet NICs (Network Interface Controllers) as 4, when actually only two Ethernet NICs (NET 0 and NET 1) are supported and available for use.

  • The Oracle ILOM System Information > PCI Devices screen shows the Maximum Add-on Devices as 4, when actually only three PCIe slots (slots 2, 3, and 4) are supported and available for use. This screen also shows the number of On-board Devices (NICs) as 4, when actually only NET 0 and NET 1 are supported and available for use.

Affected hardware and software:
  • Single-processor systems

  • Oracle ILOM 3.1

  • Releases 1.1, 1.1.1, and 1.2

Workaround:

None.

15803564 (formerly CR 7183799)

On single-processor systems, some Oracle ILOM CLI commands and web interface System Information screens show an incorrect number of supported DIMM sockets.
Issue:

For the Oracle Integrated Lights Out Manager (ILOM) command-line interface (CLI), the show /System/memory command will incorrectly return max DIMMs = 16, when the maximum number of DIMMs supported in a single-processor system is 8.

Additionally, if a DIMM is mistakenly installed in a socket associated with processor 1 (P1), the following Oracle ILOM CLI commands will identify the misconfiguration by showing the DIMM associated with P1, even though P1 is not actually present in the system. Note, however, that the DIMM will not be usable by the system.

  • -> show /System/Memory/DIMMs

  • -> show /System/Memory/DIMMs/DIMM_n, where n can be any number from 8 through 15

  • -> show /SP/powermgmt/powerconf/memory

  • -> show /SP/powermgmt/powerconf/memory/MB_P1_D0

For the Oracle ILOM web interface, the System Information > Summary screen and the System Information > Memory screen incorrectly show the maximum number of DIMMs supported as 16, when the maximum number of DIMMs supported on a single-processor system is 8.

Affected hardware and software:
  • Single-processor systems

  • Oracle ILOM 3.1

  • Releases 1.1, 1.1.1, and 1.2

Workaround:

None.

15802805 (formerly CR 7182919)

On servers configured with Internal and External Sun Storage 6 Gb SAS PCIe HBA cards, the storage drives are not detected by BIOS at boot time.
Issue:

If the server is configured with a Sun Storage 6 Gb SAS PCIe Internal host bus adapter (HBA) installed in PCIe slot 4 and a Sun Storage 6 Gb SAS PCIe External HBA installed in one of the external PCIe slots (slots 1, 2, or 3), the storage drives are not detected during the BIOS boot. As a result, Pc-Check will not detect and test the internal storage drives and you will not be able to designate an internal storage drive as the boot drive.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe Internal HBA (SG-SAS6-INT-Z and SGX-SAS6-INT-Z)

  • Sun Storage 6 Gb SAS PCIe External HBA (SG-SAS6-EXT-Z and SGX-SAS6-EXT-Z)

  • Releases 1.1, 1.1.1, and 1.2

Workaround:

To reconfigure the internal and external HBA cards so that the internal storage drives are detected at boot time, perform the following steps:

  1. Reboot the server.

    As the BIOS boots, the LSI Corporation MPT SAS2 BIOS screen appears.

  2. When the "Type Control+C to enter SAS Configuration Utility" message appears, press Ctrl+C.

    The LSI Corp Config Utility screen appears.

    Notice that the internal PCIe card (SG-SAS6-INT-Z) does not appear in the Boot Order (it is not assigned a number).

  3. Press the right arrow key to select the Boot Order column.

  4. Press the Insert key (Alter Boot List).

    The number 1 is inserted next to the internal PCIe card (SG-SAS6-INT-Z).

  5. To change the boot order, press the - (minus) key (Alter Boot Order).

    The boot order number for the internal PCIe card changes to 0, and the boot order number for the external PCIe card (SG-SAS6-EXT-Z) changes to 1.

  6. Use the arrow keys to select the Boot Order column for the external PCIe card, and press the Del key (Alter Boot List) to remove that card from the boot order.

  7. To exit the LSI Corp Config Utility, press the Esc key.

    An Exit Confirmation window appears.

  8. In the Exit Confirmation window, scroll down to "Save Changes and Reboot" and press the Enter key.

  9. When the BIOS screen appears, press F2 to enter the BIOS Setup Utility.

    The BIOS Main Menu screen appears.

  10. In the BIOS Main Menu screen, select the Boot option in the menu bar.

    The Boot Menu screen appears.

  11. Verify that the server's internal storage drives are now displayed in the Boot Menu screen.

You can now select an internal storage drive to be at the top of the boot list.

16014346

Unable to obtain DHCP lease at boot time with Red Hat Enterprise Linux operating systems.
Issue:

For configurations in which the auto-negotiation process takes more than five seconds, the boot script might fail with the following message:

ethX: failed. No link present. Check cable?

Affected hardware and software:
  • Red Hat Enterprise Linux operating systems

  • Release 1.2

Workaround:

If this error message appears even though the presence of a link can be confirmed using the ethtool ethX command, set LINKDELAY=5 in /etc/sysconfig/network-scripts/ifcfg-ethX.


Note - Link time can take up to 30 seconds. Adjust the LINKDELAY value accordingly.
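The edit can be scripted; the sketch below applies it to a temporary copy of an ifcfg file (on a live system the target would be /etc/sysconfig/network-scripts/ifcfg-ethX).

```shell
# Ensure LINKDELAY=5 is present in an ifcfg-style file.
# A temporary file stands in for /etc/sysconfig/network-scripts/ifcfg-ethX.
CFG=$(mktemp)
printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nONBOOT=yes\n' > "$CFG"
grep -q '^LINKDELAY=' "$CFG" || echo 'LINKDELAY=5' >> "$CFG"
grep '^LINKDELAY=' "$CFG"    # prints: LINKDELAY=5
```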



Alternatively, you can use NetworkManager to configure the interfaces, which avoids the set timeout. For configuration instructions for using NetworkManager, refer to the documentation provided with your distribution.