Sun Server X3-2L (formerly Sun Fire X4270 M3)

Product Notes, Version 1.3

Hardware Known Issues

Table 8 Hardware Known Open Issues 

BugDB
Description
17977420
Solaris 10 U11 installation fails on a system configured with an InfiniBand CX2 card.
Issue:

When Solaris 10 U11 is installed on a system that has an InfiniBand CX2 card configured, the Solaris installation fails.

Affected hardware and software:
  • Sun Server X3-2L Software 1.3

  • InfiniBand CX2 card

  • Solaris 10 U11

Workaround:

Disable PCI 64-bit resource allocation in BIOS before installing Solaris 10 U11.
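
A possible way to change this setting from the BIOS Setup Utility is sketched below; the menu and option names are assumptions and might differ between BIOS versions, so confirm them in the BIOS Setup Utility for your system.

  1. During boot, when prompted, press F2 to enter the BIOS Setup Utility.

  2. In the IO menu, locate the PCI 64-bit resources allocation option and set it to Disabled.

  3. Press F10 to save the change and exit BIOS, then proceed with the Solaris 10 U11 installation.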

17848060
Sun Storage 10 GbE FCoE PCIe and ExpressModule converged network adapter cards do not have a supported Windows Server 2012 R2 driver.
Issue:

When Windows Server 2012 R2 is installed on a system with a Sun Storage 10 GbE FCoE PCIe or ExpressModule card configured, neither Windows nor Oracle System Assistant installs a driver for the card.

Affected hardware and software:
  • Sun Storage 10 GbE FCoE PCIe and/or ExpressModule card

  • Windows 2012 R2

Workaround:

There is currently no workaround or fix available.

None
PCIe slot 1 on the server service label and the server rear panel is mislabeled.
Issue:

PCIe slot 1 on the server service label and the server rear panel is mislabeled. PCIe slot 1 supports an x16 electrical interface, but is labeled incorrectly as x8.

Affected hardware and software:
  • Sun Server X3-2L

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

The nomenclature for PCIe slot 1 on the service label and the back panel label will be revised in subsequent releases of the system.

15584702

(formerly CR 6875309)

MegaRAID mouse pointer does not work on Oracle ILOM Remote Console.
Issue:

When using the Oracle ILOM Remote System Console (with the mouse mode set to Absolute) on a server with a Sun Storage 6 Gb SAS PCIe RAID HBA Internal option card installed, if you boot the system and press Ctrl+H to enter the LSI MegaRAID BIOS Utility, the mouse pointer moves only vertically and horizontally along the left and top edges of the utility screen.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe RAID HBA Internal (SGX-SAS6-R-INT-Z and SG-SAS6-R-INT-Z)

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

In the Oracle ILOM Remote System Console, change the mouse mode setting from Absolute (the default) to Relative mode.

For instructions for setting Oracle ILOM Remote System Console to Relative mode, see the Oracle ILOM 3.1 Documentation Library at: http://www.oracle.com/pls/topic/lookup?ctx=ilom31.
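
The mouse mode can also be changed from the Oracle ILOM command-line interface. The following is a minimal sketch; the /SP/services/kvms mousemode property reflects Oracle ILOM 3.x behavior and should be verified against the documentation for your ILOM version.

-> set /SP/services/kvms mousemode=relative

If the pointer behavior does not change immediately, restart the Oracle ILOM Remote System Console session.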

None
SAS expander firmware must be updated prior to updating HBA firmware.
Issue:

On systems with a Sun Storage 6 Gb SAS PCIe HBA Internal host bus adapter (HBA), it is critical that the SAS expander firmware be updated to version 0901 before the HBA firmware is updated to version 11.00.00.00. If the HBA firmware is updated before the SAS expander firmware, the system will not boot.


Note - See also CR 7095163 in these product notes for important related information.


Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe HBA Internal (SG-SAS6-INT-Z and SGX-SAS6-INT-Z)

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

Use Oracle System Assistant to update the system firmware. Oracle System Assistant updates components automatically and always updates the SAS expander before updating the HBA. However, if you elect to update components one at a time (for example, by unchecking components in the Oracle System Assistant preview list for the Update Firmware task), do not update the HBA before the SAS expander.

15763252 (formerly CR 7125220)
System appears to hang when booted in UEFI BIOS mode if the Sun Storage 6 Gb SAS PCIe RAID HBA is running LSI firmware version 10M09P9 or older.
Issue:

The hang is caused by the HBA's UEFI driver. Specifically, the Driver Configuration Protocol has to be called on each device handle (there is no mechanism to associate it with the HBA device after it is installed). The Unified Extensible Firmware Interface (UEFI) specification states that the protocol should return EFI_UNSUPPORTED when it is called for the wrong device. With older LSI firmware running on the HBA, however, the HBA driver attempts to use the device without checking it, which causes a processor exception. Newer versions of the HBA LSI firmware fix the protocol so that it checks the device and returns the appropriate status code.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe RAID HBA, Internal option card (SGX-SAS6-R-INT-Z and SG-SAS6-R-INT-Z)

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

If the problem is encountered, you can recover in any of the following ways:

  • Update the HBA LSI firmware. For instructions for updating the HBA firmware, see Update HBA Firmware to Support UEFI BIOS.

  • Use Oracle ILOM to restore BIOS to the default settings. This reverts the BIOS mode back to Legacy mode, which is the factory default.

  • Use the Oracle ILOM BIOS Configuration Backup and Restore feature to change the UEFI Boot Mode Option back to Legacy BIOS, instead of UEFI BIOS.
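
For the second option above (restoring BIOS to the default settings through Oracle ILOM), a minimal CLI sketch follows. The reset_to_defaults property is based on the Oracle ILOM 3.1 BIOS configuration feature and is an assumption to verify against your ILOM version; the factory defaults are applied the next time the host is powered on.

-> set /System/BIOS reset_to_defaults=factory
-> reset /System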

15788976 (formerly CR 7165568)
GRUB boot loader can only boot from the first eight hard drives in a system.
Issue:

Some versions of the GRUB boot loader can only boot from the first eight hard drives in a system. It is possible to install the operating system (OS) and boot loader to a drive that is ninth or higher in the list of drives connected to host bus adapters (HBAs) with Option ROMs enabled. However, when the system is rebooted after the OS installation, the GRUB boot loader will hang at the GRUB prompt, and will not execute disk I/O operations to load the OS from the disk drive.

Affected software:
  • Oracle Linux 6.1, using Unified Extensible Firmware Interface (UEFI) BIOS or Legacy (non-UEFI) BIOS

  • Red Hat Enterprise Linux (RHEL 6.1) using UEFI BIOS or Legacy BIOS

  • SUSE Linux Enterprise Server (SLES) 11 SP1/SP2, using Legacy BIOS

  • Oracle Linux 5.7 and 5.8 using Legacy BIOS

  • RHEL 5.7 and 5.8 using Legacy BIOS

  • Oracle VM 3.0 and 3.1 using Legacy BIOS

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

Depending on your operating system and your BIOS configuration, choose one of the following solutions.

Solution 1 (Supporting all operating systems and either Legacy BIOS or UEFI BIOS configurations):

  1. Rearrange the disk drives and reinstall the operating system and boot loader to any one of the first eight disk drives in the system. This method might require you to enter the BIOS Setup Utility and disable the Option ROMs of HBAs that are connected to disk drives that are not used for system boot.

    For information on entering the BIOS Setup Utility and changing Option ROM settings of HBAs, see “Configure Option ROM Settings” in the Sun Server X3-2L Administration Guide.

Solution 2 (Supporting Oracle Linux 6.1 and RHEL 6.1 in a Legacy BIOS configuration):

This procedure details the process of updating the GRUB RPM of the OS, and reinstalling GRUB to the MBR of the disk drive from a rescue environment. For more information on updating the GRUB MBR boot code from a rescue environment, see http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-rescuemode.html#Rescue_Mode-x86.

Before you begin, you will need to obtain the Oracle Linux 6.2 or RHEL 6.2 installation media.

  1. Boot the system from the Oracle Linux 6.2 or RHEL 6.2 installation boot media, as appropriate.

  2. From the installation prompt, type linux rescue to enter the rescue environment.

  3. Create a directory for the installation media.

    mkdir /mnt/cd

  4. Mount the installation media, and copy the GRUB package to the installed system.

    mount -o ro /dev/sr0 /mnt/cd

    cp /mnt/cd/Packages/grub-0.97-75*rpm /mnt/sysimage

  5. Change root to the root partition of the installed system, and update the GRUB package.

    chroot /mnt/sysimage

    yum localupdate /grub-0.97-75*rpm || rpm -Uvh /grub-0.97-75*rpm

  6. Reinstall the GRUB boot loader.

    /sbin/grub-install bootpart

    where bootpart is the boot device (typically, /dev/sda).
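
    For example, if the operating system and boot loader were installed to the first disk, the command would typically be:

    /sbin/grub-install /dev/sda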

  7. Review the /boot/grub/grub.conf file, as additional entries might be needed for GRUB to control additional operating systems.

  8. Reboot the system. For example, power cycle the host from the Oracle ILOM CLI:

    reset /System

Solution 3 (Supporting Oracle Linux 6.1 and RHEL 6.1 in a UEFI BIOS configuration):

This procedure details the process of updating the grub.efi binary by updating the GRUB RPM to the latest version from a rescue environment. For more information on updating the GRUB RPM from a rescue environment, see http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-rescuemode.html#Rescue_Mode-x86.

Before you begin, you will need to obtain the Oracle Linux 6.2 or RHEL 6.2 installation media.

  1. Boot the system from the Oracle Linux 6.2 or RHEL 6.2 installation boot media, as appropriate.

  2. From the UEFI boot loader menu, select rescue to enter the rescue environment.

  3. Create a directory for the installation media.

    mkdir /mnt/cd

  4. Mount the installation media, and copy the GRUB package to the installed system.

    mount -o ro /dev/sr0 /mnt/cd

    cp /mnt/cd/Packages/grub-0.97-75*rpm /mnt/sysimage

  5. Change root to the root partition of the installed system, and update the GRUB package.

    chroot /mnt/sysimage

    yum localupdate /grub-0.97-75*rpm || rpm -Uvh /grub-0.97-75*rpm

  6. Exit the chroot environment.

    exit

  7. Exit rescue mode.

  8. Reboot the system. For example, power cycle the host from the Oracle ILOM CLI:

    reset /System

15789031 (formerly CR 7165622)
On servers configured with the Sun Storage 6 Gb SAS PCIe Internal HBA and UEFI mode selected in BIOS, attempts to install the Windows Server 2008 operating system to a newly created R1 or R10 RAID volume fail.
Issue:

Note - This problem does not occur in Legacy BIOS mode. If you are installing Windows Server 2008 in Legacy mode, you will not experience this problem.


In Unified Extensible Firmware Interface (UEFI) BIOS mode, during setup of Windows Server 2008 R2 SP1 or Windows Server 2008 SP2, the installer is unable to detect a newly created R1 or R10 RAID volume. This might occur in cases where the Sun Storage 6 Gb PCIe Internal host bus adapter (HBA) has had other disks or prior RAID configurations configured on it.

This problem is due to an issue with the way data is managed within a mapping table used in the HBA NVRAM when in UEFI mode. If multiple RAID configurations are created, then removed (such as might occur in a test environment), the entries used in the mapping table might be filled and new configurations cannot be added. This happens because the stale data in the mapping table from prior configurations is not purged.

Affected hardware and software:
  • Windows Server 2008 R2 SP1 and Windows Server 2008 SP2

  • Sun Storage 6 Gb SAS PCIe Internal HBA (SG-SAS6-INT-Z and SGX-SAS6-INT-Z)

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

To clear entries in the HBA NVRAM mapping table, perform the following steps:

  1. Reset or power cycle the host, and when BIOS starts, press F2 to access the BIOS Setup Utility.

  2. In the BIOS Setup Utility screen, select the Boot Menu, temporarily change the UEFI/BIOS Boot Mode to Legacy BIOS, and press F10 to save the change and exit BIOS.

  3. When BIOS restarts, press F8, and then watch the monitor for the LSI BIOS to start.

    At the F8 boot menu, you should see the logical volume.

  4. At the F8 boot menu, scroll down and enter the BIOS Setup Utility again.

  5. In the BIOS Setup Utility screen, select the Boot Menu, change the UEFI/BIOS Boot Mode back to UEFI, and press F10 to save the change and exit BIOS.

  6. Restart Windows Server 2008 setup.

    On the next boot attempt, the Windows Server 2008 installer will see the logical volume.

15785186 (formerly CR 7160984)
Emulex HBA: UEFI “Add Boot Device” hangs when invoked if “Scan Fibre Devices” is not run first.
Issue:

Note - This problem only occurs on Emulex HBAs running EFIBoot version 4.12a.15 firmware. If you are running a different version of the HBA firmware, you will not experience this issue.


At the UEFI Driver control HII menu for the Emulex HBA, with Set Boot From San enabled, if you run the Add Boot Device function, the Please Wait message appears for approximately 3 to 5 seconds, and then the system hangs. You must reset the server to clear the condition.

However, if you run the Scan Fibre Devices function first, and then you run the Add Boot Device function, the Add Boot Device function works correctly. The hang condition only occurs if the Add Boot Device function is run first.

Affected hardware and software:
  • StorageTek 8 Gb FC PCIe HBA Dual Port Emulex, with EFIBoot version 4.12a.15 firmware (SG-PCIE2FC-EM8-Z and SG-XPCIE2FC-EM8-N)

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

To recover from this hang condition, enter the following command to power cycle the host:

reset /System

15787798 (formerly CR 7164218)
MegaRAID Storage Manager V11.08.03.02 is not able to assign hot spares if the virtual drive is based on the Sun Storage 6 Gb SAS PCIe HBA, Internal, and is constructed on 3-TB drives using EFI partitioning.
Issue:

MegaRAID Storage Manager V11.08.03.02 is not able to assign hot spares if the virtual drive is based on the Sun Storage 6 Gb SAS PCIe HBA, Internal option card, and is constructed on 3-terabyte (3-TB) drives using Extensible Firmware Interface (EFI) partitioning.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe HBA, Internal (SGX-SAS6-INT-Z and SG-SAS6-INT-Z)

  • MegaRAID Storage Manager V11.08.03.02

  • Releases 1.0, 1.1, 1.2, and 1.3

Workaround:

Use the sas2ircu utility until the defect in MegaRAID Storage Manager is corrected.
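
For reference, a minimal sas2ircu sketch for assigning a hot spare is shown below. The controller index and the Enclosure:Bay value are placeholders; use the values that the utility reports for your system.

# List the controllers and note the controller index (0 in this sketch).
sas2ircu list

# Display the enclosure and bay numbers of the drive to use as a hot spare.
sas2ircu 0 display

# Assign the drive at Enclosure:Bay 2:5 (placeholder values) as a hot spare.
sas2ircu 0 hotspare 2:5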

15803551, 15803553 (formerly CR 7183782 and CR 7183789)
On single-processor systems, some Oracle ILOM web interface System Information screens show an incorrect number of Ethernet ports and PCIe ports available for use.
Issue:

On single-processor systems, Ethernet ports NET 2 and NET 3, and PCIe slots 1, 2, and 3 are not supported. However, the following Oracle Integrated Lights Out Manager (ILOM) web interface screens incorrectly show these ports and slots as available for use.

  • The Oracle ILOM System Information > Summary screen and the System Information > Networking screen show the number of supported Ethernet NICs (Network Interface Controllers) as 4, when actually only two Ethernet NICs (NET 0 and NET 1) are supported and available for use.

  • The Oracle ILOM System Information > PCI Devices screen shows the Maximum Add-on Devices as 6, when actually only three PCIe slots (slots 4, 5, and 6) are supported and available for use. This screen also shows the number of On-board Devices (NICs) as 4, when actually only NET 0 and NET 1 are supported and available for use.

Affected hardware and software:
  • Single-processor Sun Server X3-2L systems

  • Oracle ILOM 3.1

  • Releases 1.1, 1.2, and 1.3

Workaround:

None.

15803117 (formerly CR 7183271)
On servers that are configured with Internal and External Sun Storage 6 Gb SAS PCIe HBA cards, the storage drives are not detected by BIOS at boot time.
Issue:

If the server is configured with a Sun Storage 6 Gb SAS PCIe Internal host bus adapter (HBA) installed in PCIe slot 6 and a Sun Storage 6 Gb SAS PCIe External HBA installed in one of the external PCIe slots (slots 1 through 5), the storage drives are not detected during the BIOS boot. As a result, Pc-Check will not detect and test the internal storage drives and you will not be able to designate an internal storage drive as the boot drive.

Affected hardware and software:
  • Sun Storage 6 Gb SAS PCIe Internal HBA (SG-SAS6-INT-Z and SGX-SAS6-INT-Z)

  • Sun Storage 6 Gb SAS PCIe External HBA (SG-SAS6-EXT-Z and SGX-SAS6-EXT-Z)

  • Releases 1.1, 1.2, and 1.3

Workaround:

To reconfigure the internal and external HBA cards so that the internal storage drives are detected at boot time, perform the following steps:

  1. Reboot the server.

    As the BIOS boots, the LSI Corporation MPT SAS2 BIOS screen appears.

  2. When the “Type Control+C to enter SAS Configuration Utility” message appears, press Ctrl+C.

    The LSI Corp Config Utility screen appears.

    Note that the internal PCIe card (SG-SAS6-INT-Z) does not appear in the Boot Order column.

  3. Press the right arrow key to select the Boot Order column.

  4. Press the Insert key (Alter Boot List).

    The number 1 is inserted next to the internal PCIe card (SG-SAS6-INT-Z).

  5. To change the boot order, press the - (minus) key (Alter Boot Order).

    The Boot order number for the internal PCIe card is changed to 0 (zero) and the boot order for the external PCIe card (SG-SAS6-EXT-Z) is changed to 1 (one).

  6. Use the arrow keys to select the Boot Order column for the external PCIe card and press the Del key (Alter Boot List) to remove the external PCIe card from the boot order.

  7. To exit from the LSI Corp Config Utility, press the Esc key.

    An exit confirmation screen appears.

  8. In the Exit Confirmation screen, scroll down to “Save Changes and Reboot” and press the Enter key.

  9. When the BIOS screen appears, press the F2 key to enter the BIOS Setup Utility.

    The BIOS Main screen appears.

  10. In the BIOS Main screen, select the Boot option in the menu bar.

    The Boot Menu screen appears.

  11. Verify that the server's internal storage drives are now displayed in the Boot Menu screen.

    You can now select an internal storage drive to be at the top of the boot list order.

15803564 (formerly CR 7183799)
On single-processor systems, some Oracle ILOM CLI commands and web interface System Information screens show an incorrect number of supported DIMM sockets.
Issue:

For the Oracle Integrated Lights Out Manager (ILOM) command-line interface (CLI), the show /System/Memory command incorrectly returns max DIMMs = 16, when the maximum number of supported DIMMs on a single-processor system is 8.

Additionally, if a DIMM is mistakenly installed in a socket associated with processor 1 (P1), the following Oracle ILOM CLI commands identify the misconfiguration by showing the DIMM as associated with P1, even though P1 is not actually present in the system. Note, however, that the DIMM will not be usable by the system.

  • -> show /System/Memory/DIMMs

  • -> show /System/Memory/DIMMs/DIMM_n, where n can be any number from 8 through 15

  • -> show /SP/powermgmt/powerconf/memory

  • -> show /SP/powermgmt/powerconf/memory/MB_P1_D0

For the Oracle ILOM web interface, the System Information > Summary screen and the System Information > Memory screen incorrectly show the maximum number of supported DIMMs as 16, when the maximum number of supported DIMMs on a single-processor system is 8.

Affected hardware and software:
  • Single-processor Sun Server X3-2L systems

  • Oracle ILOM 3.1

  • Release 1.1

Workaround:

None.