Sun Server X2-4 (formerly Sun Fire X4470 M2)

Service Manual

Document Information

Using This Documentation

1.  Sun Server X2-4 Service Manual Overview

1.1 System Overview

1.1.1 Intel Xeon E7 Platform

1.1.2 Block Diagrams

1.1.3 Processors (CPUs)

1.1.4 Memory

1.1.5 Cooling

1.1.6 Input/Output (I/O)

1.1.7 Summary of Supported Components and Capabilities

1.2 Server Front Panel Features

1.3 Server Back Panel Features

1.4 Performing Service Related Tasks

2.  Preparing to Service the Sun Server X2-4

2.1 Location of Replaceable Components

2.2 Tools and Equipment Needed

2.3 Performing Electrostatic Discharge and Static Prevention Measures

2.3.1 Using an Antistatic Wrist Strap

2.3.2 Using an Antistatic Mat

2.4 Positioning the Server for Maintenance

Extend the Server to the Maintenance Position

2.5 Releasing the Cable Management Arm

Release the CMA

2.6 Powering Off the Server

Power Off the Server Using the Service Processor Command-Line Interface

2.7 Removing the Server Top Cover

Remove the Server Top Cover

2.8 Removing or Installing Filler Panels

2.9 Attaching Devices to the Server

2.9.1 Connector Locations

2.9.2 Cabling the Server

3.  Servicing CRU Components That Do Not Require Server Power Off

3.1 Servicing Disk Drives (CRU)

3.1.1 Disk Drive Status LED Reference

3.1.2 Removing and Installing Disk Drives and Disk Drive Filler Panels

Remove a Disk Drive Filler Panel

Remove a Disk Drive

Install a Disk Drive

Install a Disk Drive Filler Panel

3.2 Servicing Fan Modules (CRU)

3.2.1 About Server Fans

3.2.2 Fan Module LED Reference

3.2.3 Detecting Fan Module Failure

3.2.4 Removing and Installing Fan Modules

Remove a Fan Module

Install a Fan Module

3.3 Servicing Power Supplies (CRU)

3.3.1 Power Supply LED Reference

3.3.2 Detecting a Power Supply Failure

3.3.3 Removing and Installing Power Supplies

Remove a Power Supply

Install a Power Supply

4.  Servicing CRU Components That Require Server Power Off

4.1 Servicing Memory Risers and DIMMs (CRU)

4.1.1 CPUs, Memory Risers, and DIMMs Physical Layout

4.1.2 Memory Riser Population Rules

4.1.3 Memory Riser DIMM Population Rules

4.1.4 Memory Performance Guidelines

4.1.5 DIMM Fault Isolation

4.1.6 Supported DIMMs

4.1.7 Unsupported DIMMs

4.1.8 Removing and Installing Memory Risers, DIMMs, and Filler Panels

Remove a Memory Riser Filler Panel

Remove a DIMM Filler Panel

Remove a Memory Riser and DIMM

Install Memory Risers and DIMMs

Install a Memory Riser Filler Panel

Install a DIMM Filler Panel

4.2 Servicing PCIe Cards (CRU)

4.2.1 PCIe Card Configuration Rules

4.2.2 PCIe Cards With Bootable Devices

4.2.3 Avoiding PCI Resource Exhaustion Errors

4.2.4 Removing and Installing PCIe Cards and PCIe Card Filler Panels

Remove a PCIe Card Filler Panel

Remove a PCIe Card

Install a PCIe Card

Install a PCIe Card Filler Panel

4.3 Servicing the DVD Drive and DVD Drive Filler Panel (CRU)

Remove the DVD Drive or DVD Drive Filler Panel

Install the DVD Drive or DVD Drive Filler Panel

4.4 Servicing the System Lithium Battery (CRU)

Remove the System Battery

Install the System Battery

5.  Servicing FRU Components

5.1 Servicing the CPU and Heatsink (FRU)

5.1.1 CPU Placement

5.1.2 Removing and Installing a Heatsink Filler Panel, CPU Cover Plate, Heatsink, and CPU

5.2 Servicing the Fan Board (FRU)

Remove the Fan Board

Install the Fan Board

5.3 Servicing the Power Supply Backplane (FRU)

Remove the Power Supply Backplane

Install the Power Supply Backplane

5.4 Servicing the Disk Drive Backplane (FRU)

Remove the Disk Drive Backplane

Install the Disk Drive Backplane

5.5 Servicing the Motherboard (FRU)

Remove the Motherboard

Install the Motherboard

6.  Returning the Server to Operation

6.1 Replacing the Server Top Cover

Replace the Server Top Cover

6.2 Returning the Server to the Normal Rack Position

Return the Server to the Normal Rack Position

6.3 Powering On the Server

Power On the Server

7.  Servicing the Server at Boot Time

7.1 Powering On the Server

7.2 About the BIOS

7.3 Default BIOS Power-On Self-Test (POST) Events

7.4 BIOS POST F1 and F2 Errors

7.5 How BIOS POST Memory Testing Works

7.6 Ethernet Port Device and Driver Naming

7.6.1 Ethernet Port Booting Priority

7.7 BIOS Setup Utility Menus

7.8 Performing Common BIOS Procedures

Access the BIOS Setup Utility

Reset the BIOS Password

Configure Support for TPM

Configure SP LAN Settings

Configure Option ROM Settings

7.8.1 Configuring Serial Port Sharing

7.9 BIOS and SP Updates

7.10 BIOS Configuration Tool

8.  Troubleshooting the Server and ILOM Defaults

8.1 Troubleshooting the Server

8.2 Diagnostic Tools

8.2.1 Diagnostic Tool Documentation

8.3 Using the Preboot Menu Utility

8.3.1 Accessing the Preboot Menu

8.3.2 Restoring Oracle ILOM to Default Settings

8.3.3 Restoring Oracle ILOM Access to the Serial Console

8.3.4 Restoring the SP Firmware Image

8.3.5 Preboot Menu Command Summary

8.4 Contacting Support

8.5 Locating the Chassis Serial Number

A.  Server Specifications

A.1 Physical Specifications

A.2 Electrical Specifications

A.3 Environmental Requirements

B.  BIOS Setup Utility Menus

B.1 BIOS Main Menu Selections

B.2 BIOS Advanced Menu Selections

B.3 BIOS PCIPnP Menu Selections

B.4 BIOS Boot Menu Selections

B.5 BIOS Security Menu Selections

B.6 BIOS IO/MMIO Menu Selections

B.7 BIOS Chipset Menu Selections

B.8 BIOS Exit Menu Selections

C.  Connector Pinouts

C.1 USB Connectors

C.2 Serial Connector

C.3 Gigabit-Ethernet Connectors

C.4 Network Management Port Connector

C.5 Video Connectors

C.6 Serial Attached SCSI (SAS) Connector

D.  Getting Server Firmware and Software

D.1 Firmware and Software Updates

D.2 Firmware and Software Access Options

D.3 Available Software Release Packages

D.4 Accessing Firmware and Software

Download Firmware and Software Using My Oracle Support

D.4.1 Requesting Physical Media

D.4.2 Gathering Information for the Physical Media Request

D.5 Installing Updates

D.5.1 Installing Firmware

D.5.2 Installing Hardware Drivers and OS Tools


1.1 System Overview

Oracle's Sun Server X2-4 is a 3 rack unit (RU) rackmount server that uses the Intel Xeon E7 platform. This section describes the major features, components, and capabilities of the server.

1.1.1 Intel Xeon E7 Platform

The Intel Xeon E7 platform is based on the Intel Xeon Processor E7-4800 Series and uses the Intel 7500 Chipset I/O hub (IOH) as its primary chipset. The platform uses the Intel QuickPath Interface (QPI), a high-speed, differentially signaled, point-to-point interface that forms a communication fabric among the processors (CPUs) and IOHs in the system.

The Sun Server X2-4 uses two Intel 7500 Chipset I/O hubs, each connected to two of the four CPUs. One of these I/O hubs is designated the legacy I/O hub and has a connection to the Intel I/O Controller Hub 10 (ICH10) southbridge component.

1.1.2 Block Diagrams

Four CPU Block Diagram shows a block diagram for a Sun Server X2-4 with four CPUs.

Two CPU Block Diagram shows a block diagram for a Sun Server X2-4 with two CPUs.

Note - In the diagrams, the PCIe SAS/RAID Controller is shown as installed in Slot 2. If a particular SAS/RAID Controller has specific cooling requirements, it might have to be installed in Slot 4. For information about cooling requirements, refer to the Sun Server X2-4 Product Notes.

Figure 1-1 Four CPU Block Diagram

image:Figure showing four-CPU block diagram.

Figure 1-2 Two CPU Block Diagram

image:Figure showing two-CPU block diagram.

1.1.3 Processors (CPUs)

The Sun Server X2-4 supports two or four processors (CPUs), as shown in Four CPU Block Diagram and Two CPU Block Diagram. The two-CPU configuration must have CPUs (with heatsinks) in sockets 0 and 2 and heatsink filler panels installed in sockets 1 and 3.

In a two-CPU configuration, all three QPI interconnects and both CPUs must be operational. The four-CPU configuration offers a greater level of resiliency with redundant QPI interconnects that allow working CPUs to route around a disabled CPU as the system starts.

Features of each Intel Xeon Processor E7-4800 Series include:

Note - For more information about Intel QuickPath Interconnects, refer to Weaving High Performance Multiprocessor Fabric, available from Intel Press.

1.1.4 Memory

Each CPU in the Sun Server X2-4 has four SMI channels leading to Intel 7510 Scalable Memory Buffers (located on two memory risers). Each memory buffer has an SMI link to the CPU and two DDR3 interfaces. Each SMI interface can operate at speeds of 6.4 GT/s, which correspond to DDR3 operation at 1067 MT/s. From the CPU to the Intel 7510 Scalable Memory Buffer, the SMI interface supports 11 lanes (9 data + 1 CRC + 1 spare). From the Intel 7510 Scalable Memory Buffer to the CPU, the SMI interface supports 14 lanes (12 data + 1 CRC + 1 spare). The CPU retries memory transactions that incur a CRC error. For persistent errors, the SMI link has spare lanes for automatic self-healing.

The system supports a maximum of eight memory risers (four-CPU configuration) or four memory risers (two-CPU configuration). Each riser houses 8 DIMM slots for the four DDR3 channels. The system can operate with 0, 2, 4, 6, or 8 DIMMs on a given riser. For maximum performance, install at least two ranks of DIMMs on every available DDR3 channel (for example, 4 DIMMs per riser with two risers per CPU).

Each of the two memory controllers in a CPU operates its two SMI channels as a lockstep pair. The memory controller treats each pair of DDR3 channels behind the two memory buffers as a 144-bit-wide DRAM interface. As a result, DIMMs must be installed in pairs, with identical DIMMs in each pair.
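The population and pairing rules above can be sketched as a small validation helper. This is a hypothetical illustration, not an Oracle-provided tool, and the assumption that slots pair as (0,1), (2,3), (4,5), (6,7) is for illustration only; see 4.1.3 Memory Riser DIMM Population Rules for the actual slot pairing.

```python
# Illustrative sketch of the riser population rules described above.
# Assumption: slots pair as (0,1), (2,3), (4,5), (6,7) -- see 4.1.3
# for the real pairing. Not an Oracle-provided tool.

VALID_DIMM_COUNTS = {0, 2, 4, 6, 8}  # a riser may hold 0, 2, 4, 6, or 8 DIMMs

def check_riser(slots):
    """slots: list of 8 entries, each a DIMM size in GB or None if empty."""
    if len(slots) != 8:
        raise ValueError("a memory riser has exactly 8 DIMM slots")
    populated = [s for s in slots if s is not None]
    if len(populated) not in VALID_DIMM_COUNTS:
        return False  # riser must hold 0, 2, 4, 6, or 8 DIMMs
    # DIMMs are installed in pairs, and both DIMMs in a pair must be identical
    for a, b in zip(slots[0::2], slots[1::2]):
        if (a is None) != (b is None) or a != b:
            return False
    return True

# Two matched pairs of 16-GB DIMMs is a valid population:
print(check_riser([16, 16, 16, 16, None, None, None, None]))  # True
# A mismatched pair is not:
print(check_riser([16, 8, None, None, None, None, None, None]))  # False
```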

The DDR3 interfaces include the following features:

For more information about CPUs, memory risers, and memory layout, including guidelines for populating memory risers and DIMMs, see 4.1 Servicing Memory Risers and DIMMs (CRU).

Memory Architecture shows the architecture of the server memory.

Figure 1-3 Memory Architecture

image:Figure showing memory architecture.

1.1.5 Cooling

The Sun Server X2-4 is cooled from front to back. Cooling occurs in two areas of the chassis, separated by a plastic dividing wall. In the power supply cooling zone, fans at the back of the power supplies cool both the drive bays and the power supplies by drawing air into the depressurized zone at the right of the chassis. In the main cooling zones, six 92-mm high-performance fans, arranged in two rows for redundancy, cool the motherboard, memory risers, and I/O cards. The motherboard is divided into three zones, and each pair of fans is regulated separately to cool its zone. Because the main cooling zones are pressurized, it is important to maintain the seal of the dividing wall so that the power supply units can draw air through the drive bays.

The unrestricted airflow over the motherboard minimizes system noise. Dividing the cooling into zones allows for greater use of system resources, since each zone can operate independently at its highest efficiency.

Server Cooling Zones shows the cooling zones.

Figure 1-4 Server Cooling Zones

image:Figure showing cooling zones in server.

Figure Legend

1 Power supply cooling zone

2 Chassis cooling zone 2

3 Chassis cooling zone 1

4 Chassis cooling zone 0

1.1.6 Input/Output (I/O)

For internal storage, the server chassis provides:

In addition, the service processor can present virtual USB storage devices to the system.

The ICH10 southbridge on the motherboard provides six built-in SATA2 (3-Gbit/s) ports, accessible through two SAS4I connectors (Port 0-3 and Port 4-5). When configured with any 2.5-inch SAS drives, the system must be equipped with one PCI Express (PCIe) Gen-2 internal HBA card to support the front 2.5-inch drive bays. Each offered PCIe Gen-2 HBA has 8 SAS2/SATA2 internal ports, accessible through two SAS4I connectors (Port 0-3 and Port 4-7). Because the drive cage has only six bays, Ports 6-7 of an internal HBA are not used in this system.

With an internal SAS-2 HBA card installed in a PCIe slot, the six bays can handle any combination of supported SAS and SATA hard disk drives (HDDs) and solid-state drives (SSDs). If the disk backplane is connected to the built-in ICH10 SATA-2 controller rather than an HBA card, only SATA storage devices will operate. (When a RAID volume is configured on the HBA card, the drive bays for the RAID members must hold the same type of storage device.)
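The controller and RAID rules above can be sketched as follows. This is an illustrative helper with simple "hba"/"ich10" labels chosen for the example, not an Oracle tool.

```python
# Illustrative sketch of the storage rules described above (not an Oracle tool).

def supported_drive_types(controller):
    """controller: 'hba' (internal SAS-2 HBA card) or 'ich10' (built-in SATA-2)."""
    return {"hba": {"SAS", "SATA"}, "ich10": {"SATA"}}[controller]

def raid_volume_ok(drive_types):
    """All members of a RAID volume must use the same type of storage device."""
    return len(set(drive_types)) == 1

print(supported_drive_types("ich10"))         # {'SATA'}
print(raid_volume_ok(["SAS", "SAS", "SAS"]))  # True
print(raid_volume_ok(["SAS", "SATA"]))        # False
```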

1.1.7 Summary of Supported Components and Capabilities

The following table summarizes the components and capabilities of the Sun Server X2-4.

Table 1-1 Sun Server X2-4 Components and Capabilities

Sun Server X2-4
Processor (CPU)
Supported configurations:
  • Two processors installed in socket 0 and socket 2

  • Four processors installed in sockets 0 through 3

For the latest information on CPU specifications, go to the Sun x86 Servers web site and navigate to the Sun Server X2-4 page.

Memory
Up to eight memory riser modules are supported (two risers per CPU) in the server chassis. Each riser module supports eight PC3L RDIMMs, allowing up to sixteen RDIMMs per processor.
  • A 2-socket system using four riser modules populated with 16-GB RDIMMs supports a maximum of 512 GB of system memory.

  • A 4-socket system using eight riser modules populated with 16-GB RDIMMs supports a maximum of 1024 GB of system memory.
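The maximum-capacity figures above follow directly from the riser and DIMM counts; as a quick arithmetic check:

```python
# Quick check of the maximum-memory figures stated above.
DIMM_GB = 16          # largest RDIMM size cited in the table
DIMMS_PER_RISER = 8   # eight PC3L RDIMMs per riser module

for sockets, risers in ((2, 4), (4, 8)):
    total_gb = risers * DIMMS_PER_RISER * DIMM_GB
    print(f"{sockets}-socket: {risers} risers x {DIMMS_PER_RISER} DIMMs "
          f"x {DIMM_GB} GB = {total_gb} GB")
```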

Storage devices
For internal storage, the server chassis provides:
  • Six 2.5-inch drive bays, accessible through the front panel. The supported drive interfaces for each bay depend on the HBA chosen.

  • An optional slot-loading DVD+/-RW drive on the front of the server, below the drive bays. This SATA DVD drive connects to a USB-SATA bridge, so it appears to the system software as a USB storage device.

  • One internal high-speed USB port on the motherboard. This port can hold a USB flash device for system booting.

USB 2.0 ports
Two front, two rear, and one internal.
VGA ports
One front and one rear high-density DB-15 video port.

Note - The rear VGA port supports VESA Device Data Channel for monitor identification.

PCI Express 2.0 I/O slots
Ten PCI Express 2.0 slots that accommodate low-profile PCIe cards. All slots support x8 PCIe connectors. Two slots are also capable of supporting x16 PCIe connectors.
  • Slots 0 and 9: x4 electrical interface

  • Slots 1, 2, 4, 6, 7, and 8: x8 electrical interface

  • Slots 3 and 5: x8 or x16 electrical interface (x16 connector)

Note - PCI Express slots 3 and 5 will operate as x16 interfaces only when an x16 capable card is installed and the adjacent slot (4 or 6) is unpopulated.
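The width rules for slots 3 and 5 can be sketched as follows. The slot numbering and the adjacent-slot mapping come from the list and note above; the helper itself is hypothetical, not an Oracle tool.

```python
# Illustrative sketch of the PCIe slot-width rules described above.
# Slot widths and the adjacency rule are taken from the table; the
# helper is hypothetical, not an Oracle-provided tool.

SLOT_WIDTH = {0: 4, 1: 8, 2: 8, 3: 8, 4: 8, 5: 8, 6: 8, 7: 8, 8: 8, 9: 4}
ADJACENT = {3: 4, 5: 6}  # slot 3 widens only if slot 4 is empty; slot 5 only if 6 is

def effective_width(slot, card_width, populated_slots):
    """Electrical width a card trains at, per the rules above."""
    if slot in ADJACENT and card_width >= 16 and ADJACENT[slot] not in populated_slots:
        return 16
    return min(card_width, SLOT_WIDTH[slot])

print(effective_width(3, 16, set()))  # 16: x16 card, adjacent slot 4 empty
print(effective_width(3, 16, {4}))    # 8:  slot 4 populated, falls back to x8
print(effective_width(5, 8, set()))   # 8:  an x8 card never trains wider
```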

Cluster card slot
One specialized slot dedicated for use in storage appliances. The Sun Server X2-4 does not support populating this slot with standard PCIe cards.
PCI Express I/O cards
For a list of I/O cards that are customer-orderable options, go to the Sun x86 Servers web site and navigate to the Sun Server X2-4 page.
Ethernet ports
Four 10/100/1000 RJ-45 GbE ports on rear panel.

Each Network Interface Controller (NIC) supports Intel QuickData Technology, Intel I/OAT, VMDq, PCI-SIG SR-IOV, IPSec offload, and LinkSec.

Service processor
Integrated Baseboard Management Controller (BMC), which supports the industry-standard IPMI feature set.

Supports remote KVMS, DVD, and floppy over IP (optional license required).

Includes serial port.

Supports Ethernet access to SP through a dedicated 10/100BaseT management port and optionally through one of the host GbE ports (sideband management).

Power supplies
Two hot-swappable, auto-ranging power supplies, each with a 2000-watt capacity (at 200-240 volts), featuring a light-load efficiency mode and redundant over-subscription.
Cooling fans
Six hot-swappable, redundant fans at chassis front (top-loading); redundant fans at each power supply.
Management software
Oracle Integrated Lights Out Manager (ILOM)