9 Servicing DIMMs
This section describes how to service dual in-line memory modules (DIMMs).
DIMMs are replaceable components that require you to power off the server before servicing. For more information about replaceable components, see Illustrated Parts Breakdown and Replaceable Components.
Caution:
These procedures require that you handle components that are sensitive to electrostatic discharge. This sensitivity can cause the components to fail. To avoid damage, ensure that you follow antistatic practices as described in Electrostatic Discharge Safety.
Caution:
Ensure that all power is removed from the server before removing or installing DIMMs, or damage to the DIMMs might occur. You must disconnect all power cables from the system before performing these procedures.
The topics and procedures in this section provide information to assist you when replacing a DIMM or upgrading DIMMs.
DIMM and Processor Physical Layout
The physical layout of the DIMMs and processor(s) is shown in the following figure. When viewing the server from the front, processor 0 (P0) is on the left.
![Figure showing the AMD DIMM and processor layout.](img/mm-10524_-dimm-processor-layout.png)
Each processor, P0 and P1, has twelve DIMM slots (D0-D11), six on each side of the processor. Each DIMM slot supports a single memory channel, for a total of twelve DDR5 memory channels per processor (0-11).
Table 9-1 Memory Channels and DIMM Slots for P0 and P1

| Memory Channel | DIMM Slot |
|---|---|
| 0 | D3 |
| 1 | D1 |
| 2 | D0 |
| 3 | D5 |
| 4 | D4 |
| 5 | D2 |
| 6 | D8 |
| 7 | D10 |
| 8 | D11 |
| 9 | D6 |
| 10 | D7 |
| 11 | D9 |
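The channel-to-slot mapping in Table 9-1 (identical for P0 and P1) can be captured as a simple lookup table. This is an illustrative sketch with hypothetical helper names, not Oracle or AMD tooling:

```python
# Mapping of DDR5 memory channel number to DIMM slot label, per Table 9-1.
# The same mapping applies to both processor sockets (P0 and P1).
CHANNEL_TO_SLOT = {
    0: "D3", 1: "D1", 2: "D0", 3: "D5", 4: "D4", 5: "D2",
    6: "D8", 7: "D10", 8: "D11", 9: "D6", 10: "D7", 11: "D9",
}

def slot_for_channel(channel: int) -> str:
    """Return the DIMM slot label that carries the given memory channel (0-11)."""
    return CHANNEL_TO_SLOT[channel]
```

Each of the twelve slots carries exactly one channel, so the mapping is one-to-one in both directions.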
Note:
In single-processor systems, the DIMM slots associated with processor 1 (P1) are nonfunctional and should not be populated with DIMMs.

DIMM Population Scenarios
There are two scenarios in which you are required to populate DIMMs:
- A DIMM fails and needs to be replaced.

  In this scenario, you can use the Fault Remind button to identify the failed DIMM, and then remove the failed DIMM and replace it. To ensure that system performance is maintained, you must replace the failed DIMM with a DIMM of the same size (in gigabytes) and type (quad-rank or dual-rank). In this scenario, do not change the DIMM configuration.

- You purchased new DIMMs and want to use them to upgrade the server memory.

  In this scenario, you must adhere to the DIMM population rules and follow the recommended DIMM population order for optimal system performance.
DIMM Population Rules
The population rules for adding DIMMs to Exadata Server X10M are as follows:
- The server supports:
  - Up to 24 DDR5 DIMMs, 12 per processor socket.
  - 32 GB, 64 GB, 96 GB, and 128 GB dual-rank (DR) Registered DIMMs (RDIMMs).
  - Up to 3 TB of memory when all 24 DIMM slots are populated with 128 GB DIMMs.
  - One DIMM per channel (1DPC). Each DIMM channel consists of a single black slot.
  - A maximum memory speed of 4800 MT/s. However, the maximum attainable memory speed can be limited by the maximum speed supported by a specific processor or DIMM. All memory installed in the system operates at the same speed, or frequency.
- Populate all 12 DIMM slots per processor to achieve the highest system performance. If populating 12 DIMMs per processor is not feasible, populate each processor with 1, 2, 4, 6, 8, or 10 DIMMs.
- Populate each memory channel with the same capacity and number of banks.
- Populate processor 0 (P0) and processor 1 (P1) using the same DIMM configuration. Failure to do so results in lower system performance.
- The server operates properly with a minimum of one DIMM installed per processor. Install that DIMM in slot D5 on each processor.
- Each DIMM is shipped with a label identifying its rank classification (dual or quad). The label corresponding to the supported DIMM rank classification is: Dual-rank RDIMM 2Rx4.
- Do not mix DIMM sizes in a server, even if the DIMMs are the same type. For example, you cannot mix 96 GB RDIMMs with 64 GB RDIMMs in the same server.
- Do not mix DIMM types in a server. Load-Reduced DIMMs (LRDIMMs) are not supported.
- Do not mix DIMM module types within a memory channel. All DIMM module types must be RDIMMs with the same ECC configuration.
- Do not mix x4 and x8 DIMMs within a memory channel.
- The server does not support lockstep memory mode, which is also known as double device data correction, or Extended ECC.
- Populate the DIMM slots in the order described in the following sections, which provide an example of how to populate the DIMM slots to achieve optimal system performance.
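The countable rules above (supported per-processor DIMM counts, supported capacities, no mixed sizes) lend themselves to a simple pre-install check. The following is a minimal, hypothetical sketch of such a check; the function name and structure are illustrative, and rules that depend on physical inspection (rank labels, x4/x8 organization) are deliberately out of scope:

```python
# Supported DIMMs-per-processor counts and RDIMM capacities, per the
# population rules for Exadata Server X10M documented above.
VALID_COUNTS = {1, 2, 4, 6, 8, 10, 12}
VALID_SIZES_GB = {32, 64, 96, 128}

def check_population(dimms_per_cpu: int, sizes_gb: set) -> list:
    """Return a list of rule violations for a proposed per-processor population."""
    problems = []
    if dimms_per_cpu not in VALID_COUNTS:
        problems.append(f"{dimms_per_cpu} DIMMs per CPU is not a supported count")
    if len(sizes_gb) > 1:
        problems.append("mixed DIMM sizes are not allowed in one server")
    if not sizes_gb <= VALID_SIZES_GB:
        problems.append("only 32/64/96/128 GB RDIMMs are supported")
    return problems
```

For example, 12 DIMMs of 128 GB per processor passes cleanly, while 3 DIMMs per processor is flagged as an unsupported count.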
Populating DIMMs for Optimal System Performance
Optimal performance is generally achieved by populating the DIMMs so that the memory is symmetrical, or balanced. Symmetry is achieved by adhering to the following guidelines:
- Populate DIMMs of the same size in multiples of twelve (six for each processor).
- The DIMM population for each processor (P0 and P1) must be identical.
- DIMM channels per processor are arranged from left to right (as you face the front of the system) in the following order:

  F E D C B A G H I J K L
Table 9-2 DIMM Memory Slot Population Requirements

| DIMMs per CPU | DIMM Memory Slots Populated (CPU 0/1) |
|---|---|
| 12 | D5, D6, D3, D8, D4, D7, D1, D10, D2, D9, D0, D11 |
| 10 | D5, D6, D3, D8, D4, D7, D1, D10, D2, D9 |
| 8 | D5, D6, D3, D8, D4, D7, D1, D10 |
| 6 | D5, D6, D3, D8, D4, D7 |
| 4 | D5, D6, D3, D8 |
| 2 | D5, D6 |
| 1 | D5 |
Note:
AMD only tests and validates the population ordering documented here. Other populations might work but are NOT supported.

Dual-processor and single-processor requirements are described in the following sections:
- Populating DIMMs in Dual-Processor Systems for Optimal System Performance
- Populating DIMMs in Single-Processor Systems for Optimal System Performance
Populating DIMMs in Dual-Processor Systems for Optimal System Performance
In dual-processor systems, install DIMMs into DIMM slots starting with processor 0 (P0) D5, alternating between slots associated with processor 0 (P0) and matching slots for processor 1 (P1).
The following list describes the order in which to install DIMMs in a dual-processor system. The slot numbers correspond to DIMM slot labels D0 through D11.
DIMM population order on dual-processor systems
1. P0/D5
2. P1/D5
3. P0/D6
4. P1/D6
5. P0/D3
6. P1/D3
7. P0/D8
8. P1/D8
9. P0/D4
10. P1/D4
11. P0/D7
12. P1/D7
13. P0/D1
14. P1/D1
15. P0/D10
16. P1/D10
17. P0/D2
18. P1/D2
19. P0/D9
20. P1/D9
21. P0/D0
22. P1/D0
23. P0/D11
24. P1/D11
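The dual-processor order above is simply the per-processor order with each slot installed on P0 and then on the matching slot of P1. That pattern can be sketched as follows; the names are hypothetical helpers for illustration, not part of any vendor tool:

```python
# Per-processor DIMM population order, as documented for this server
# (the same order applies to P0 and P1).
PER_CPU_ORDER = ["D5", "D6", "D3", "D8", "D4", "D7",
                 "D1", "D10", "D2", "D9", "D0", "D11"]

def dual_cpu_order(n_dimms_per_cpu: int) -> list:
    """Return the dual-processor install order for n DIMMs per processor,
    alternating between matching P0 and P1 slots."""
    order = []
    for slot in PER_CPU_ORDER[:n_dimms_per_cpu]:
        order += [f"P0/{slot}", f"P1/{slot}"]
    return order
```

For example, populating two DIMMs per processor yields P0/D5, P1/D5, P0/D6, P1/D6, matching the start of the list above.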
Populating DIMMs in Single-Processor Systems for Optimal System Performance
In single-processor systems, install DIMMs only into DIMM slots associated with processor 0 (P0).
The following list describes the order in which to populate DIMMs in a single-processor system. The slot numbers correspond to DIMM slot labels D0 through D11.
DIMM population order on single-processor systems
1. P0/D5
2. P0/D6
3. P0/D3
4. P0/D8
5. P0/D4
6. P0/D7
7. P0/D1
8. P0/D10
9. P0/D2
10. P0/D9
11. P0/D0
12. P0/D11
Using the Server Fault Remind Button
When you press the server Fault Remind button [1], an LED located next to the Fault Remind button lights green, indicating that there is sufficient voltage in the fault remind circuit to light any fault LEDs that were lit due to a component failure. If this LED does not light when you press the Fault Remind button, the capacitor powering the fault remind circuit has likely lost its charge. This can happen if the Fault Remind button is pressed for several minutes with fault LEDs lit, or if power is removed from the server for more than 15 minutes.
The following figure shows the location of the Fault Remind button on the motherboard.
![Figure showing the location of the Fault Remind button.](img/mm-10527_fault-remind-button-location.png)