2 Maintaining Exadata Database Servers
Note:
For ease of reading, the name "Oracle Exadata Rack" is used when information refers to both Oracle Exadata and Oracle Exadata Storage Expansion Rack.
- Management Server on Database Servers
- Maintaining the Local Storage on Exadata Database Servers
Repair of the local drives does not require an Oracle Exadata Database Machine database server to be shut down.
- Maintaining Flash Disks on Exadata Database Servers
Flash disks should be monitored and replaced when necessary.
- Adding the Disk Expansion Kit to Database Servers
You can add local storage space to an Oracle Exadata Database Server by using a disk expansion kit.
- Adding Memory Expansion Kit to Database Servers
- Verifying and Modifying the Link Speed on the Client Network Ports for X7 and Later Systems
You can configure 10 GbE connections or 25 GbE connections on the client network on Oracle Exadata X7 and later database servers.
- Adding and Configuring an Extra Network Card on Oracle Exadata X6-2 and Later
You can add an additional network card on Oracle Exadata X6-2 and later systems.
- Increasing the Number of Active Cores on Database Servers
You can increase the number of active cores on Oracle Exadata using capacity-on-demand.
- Extending LVM Partitions
Logical Volume Manager (LVM) provides flexibility to reorganize the partitions in the database servers.
- Creating a Snapshot-Based Backup of Oracle Linux Database Server
- Recovering Oracle Linux Database Servers Using a Snapshot-Based Backup
You can recover database server file systems running Oracle Linux using a snapshot-based backup after severe disaster conditions strike the database server, or when the server hardware is replaced to such an extent that it amounts to new hardware.
- Re-Imaging the Oracle Exadata Database Server
The re-image procedure is necessary when a database server needs to be brought to an initial state for various reasons.
- Changing Existing Elastic Configurations for Database Servers
Elastic configurations provide a flexible and efficient mechanism to change the server configuration of your Oracle Exadata.
- Managing Quorum Disks for High Redundancy Disk Groups
Quorum disks provide two extra failure groups, allowing the Oracle RAC voting files to be stored in a high redundancy disk group on an Oracle Exadata Rack with fewer than five storage servers.
- Using vmetrics
The vmetrics package enables you to display system statistics gathered by the vmetrics service.
- Using FIPS mode
On database servers running Oracle Linux 7 or later, you can enable the kernel to run in FIPS mode.
- Exadata Database Server LED Indicator Descriptions
The indicator LEDs on Oracle Exadata database servers help you to verify the system status and identify components that require servicing.
- Exadata Database Server Images
The Exadata database server models have different external layouts and physical appearance.
2.1 Management Server on Database Servers
Management Server (MS) running on database servers provides monitoring, alerting, and other administrative capabilities. It also provides the DBMCLI command-line administration tool.
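As a quick, hedged illustration, you can use DBMCLI on a database server to check MS and view server details; the exact attributes displayed vary by software release.

# Check that MS is running and view basic server details
dbmcli -e list dbserver detail

# Restart MS if it is not responding
dbmcli -e alter dbserver restart services ms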
See Also:
- "Management Server on Database Servers" in the Oracle Exadata Database Machine System Overview guide
Parent topic: Maintaining Exadata Database Servers
2.2 Maintaining the Local Storage on Exadata Database Servers
Repair of the local drives does not require an Oracle Exadata Database Machine database server to be shut down.
No downtime of the rack is required; however, the individual server may require downtime while it is temporarily taken out of the cluster.
- Verifying the Database Server Configuration
Oracle recommends verifying the status of the database server RAID devices to avoid possible performance impact, or an outage.
- Monitoring a Database Server RAID Set Rebuilding
- Reclaiming a Hot Spare Drive After Upgrading to Oracle Exadata System Software Release 12.1.2.1.0 or Later
- Understanding Automated File Deletion Policy
Parent topic: Maintaining Exadata Database Servers
2.2.1 Verifying the Database Server Configuration
Oracle recommends verifying the status of the database server RAID devices to avoid possible performance impact, or an outage.
The impact of validating the RAID devices is minimal. The impact of corrective actions will vary depending on the specific issue uncovered, and may range from simple reconfiguration to an outage.
- About the RAID Storage Configuration
The local storage drives are configured in a RAID configuration.
- Verifying the Disk Controller Configuration on Systems Without a RAID Controller
- Verifying the Disk Controller Configuration on Systems With a RAID Controller
- Verifying Virtual Drive Configuration
- Verifying Physical Drive Configuration
Check your system for critical, degraded, or failed disks.
2.2.1.1 About the RAID Storage Configuration
The local storage drives are configured in a RAID configuration.
Table 2-1 Disk Configurations for Exadata Database Machine Two-Socket Systems

Server Type | RAID Controller | Disk Configuration
---|---|---
Oracle Exadata Database Machine X9M-2 | N/A | Two mirrored (RAID-1) NVMe flash drives in each database server
Oracle Exadata Database Machine X8M-2 | MegaRAID SAS 9361-16i | 4 disk drives in a RAID-5 configuration on each database server
Oracle Exadata Database Machine X8-2 | MegaRAID SAS 9361-16i | 4 disk drives in a RAID-5 configuration on each database server
Oracle Exadata Database Machine X7-2 | MegaRAID SAS 9361-16i | 4 disk drives in a RAID-5 configuration on each database server
Oracle Exadata Database Machine X6-2 | MegaRAID SAS 9361-8i | 4 disk drives in a RAID-5 configuration on each database server
Oracle Exadata Database Machine X5-2 | MegaRAID SAS 9361-8i | 4 disk drives in a RAID-5 configuration on each database server
Oracle Exadata Database Machine X4-2 | MegaRAID SAS 9261-8i | 4 disk drives in a RAID-5 configuration on each database server
Oracle Exadata Database Machine X3-2 | MegaRAID SAS 9261-8i | 4 disk drives in a RAID-5 configuration on each database server
Oracle Exadata Database Machine X2-2 | MegaRAID SAS 9261-8i | 4 disk drives in a RAID-5 configuration on each database server
Table 2-2 Disk Configurations for Exadata Database Machine Eight-Socket Systems

Server Type | RAID Controller | Disk Configuration
---|---|---
Oracle Exadata Database Machine X9M-8 | N/A | Two mirrored (RAID-1) NVMe flash drives in each database server
Oracle Exadata Database Machine X8M-8 | N/A | Two mirrored (RAID-1) NVMe flash drives in each database server
Oracle Exadata Database Machine X8-8 | N/A | Two mirrored (RAID-1) NVMe flash drives in each database server
Oracle Exadata Database Machine X7-8 | N/A | Two mirrored (RAID-1) NVMe flash drives in each database server
Oracle Exadata Database Machine X5-8 | MegaRAID SAS 9361-8i | 8 disk drives in each database server with one virtual drive created across the RAID-5 set
Oracle Exadata Database Machine X4-8 | MegaRAID SAS 9261-8i | 7 disk drives in each database server configured as one 6-disk RAID-5 with one global hot spare drive by default
Oracle Exadata Database Machine X3-8 | MegaRAID SAS 9261-8i | 8 disk drives in each database server with one virtual drive created across the RAID-5 set
Parent topic: Verifying the Database Server Configuration
2.2.1.2 Verifying the Disk Controller Configuration on Systems Without a RAID Controller
Commencing with Oracle Exadata X9M-2, two-socket Exadata database servers have no RAID controller. For eight-socket systems, all models starting with Oracle Exadata X7-8 have no RAID controller.
If your output is substantially different, then investigate and correct the problem. In particular, degraded virtual drives usually indicate absent or failed physical disks. Disks that show [1/2] and [U_] or [_U] for the state indicate that one of the NVMe disks is down. Failed disks should be replaced quickly.
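The state flags described above come from the Linux software RAID status file. As a hedged illustration (the md device name is an example and may differ on your system), you can inspect the mirrored NVMe devices as follows:

# Show software RAID state; [2/2] and [UU] indicate both mirror members are healthy
cat /proc/mdstat

# Detailed status for a specific md device (example device name)
mdadm --detail /dev/md24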
Parent topic: Verifying the Database Server Configuration
2.2.1.3 Verifying the Disk Controller Configuration on Systems With a RAID Controller
On Oracle Exadata X8M-2 and earlier, two-socket Exadata database servers contain a RAID controller. For eight-socket systems, all models up to Oracle Exadata X6-8 contain RAID controllers.
On two-socket systems, the expected output is one virtual drive (with none degraded or offline), five physical devices (one controller and four disks), and four disks (with no critical or failed disks).
On eight-socket systems, the expected output is one virtual drive (with none degraded or offline) and eight disks (with no critical or failed disks). The number of physical devices is 9 (one controller and eight disks) plus the number of SAS2 expansion ports (where relevant).
If your output is different, then investigate and correct the problem. Degraded virtual drives usually indicate absent or failed physical disks. Critical disks should be replaced immediately to avoid the risk of data loss if the number of failed disks in the node exceeds the count needed to sustain the operations of the system. Failed disks should also be replaced quickly.
Note:
If additional virtual drives or a hot spare are present, then it may be that the procedure to reclaim disks was not performed at deployment time or that a bare metal restore procedure was performed without using the dualboot=no qualifier. Contact Oracle Support Services and reference My Oracle Support note 1323309.1 for additional information and corrective steps.
When upgrading a database server that has a hot spare to Oracle Exadata System Software release 11.2.3.2.0 or later, the hot spare is removed, and added as an active drive to the RAID configuration.
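The device summary shown in Example 2-1 can be produced with the RAID controller utility. The following invocation is a hedged sketch; on Oracle Exadata System Software release 19.1.0 or later, substitute /opt/MegaRAID/storcli/storcli64 as noted elsewhere in this section.

/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | grep "Device Present" -A 8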
Example 2-1 Checking the disk controller configuration on a 2-socket system without the disk expansion kit
The following is an example of the expected command output on a 2-socket system without the disk expansion kit.
Device Present
================
Virtual Drives : 1
Degraded : 0
Offline : 0
Physical Devices : 5
Disks : 4
Critical Disks : 0
Failed Disks : 0
Parent topic: Verifying the Database Server Configuration
2.2.1.4 Verifying Virtual Drive Configuration
To verify the virtual drive configuration, use the following command:
Note:
If you are running Oracle Exadata System Software 19.1.0 or later, substitute /opt/MegaRAID/storcli/storcli64 for /opt/MegaRAID/MegaCli/MegaCli64 in the following commands:
/opt/MegaRAID/MegaCli/MegaCli64 CfgDsply -aALL | grep "Virtual Drive:"; \
/opt/MegaRAID/MegaCli/MegaCli64 CfgDsply -aALL | grep "Number Of Drives"; \
/opt/MegaRAID/MegaCli/MegaCli64 CfgDsply -aALL | grep "^State"
The following is an example of the output for Oracle Exadata Database Machine X4-2, Oracle Exadata Database Machine X3-2, and Oracle Exadata Database Machine X2-2. The virtual device 0 should have four drives, and the state should be Optimal.
Virtual Drive : 0 (Target Id: 0)
Number Of Drives : 4
State : Optimal
The expected output for Oracle Exadata Database Machine X3-8 Full Rack and Oracle Exadata Database Machine X2-8 Full Rack shows that the virtual device has eight drives and a state of Optimal.
Note:
If a disk replacement was performed on a database server without using the dualboot=no
option, then the database server may have three virtual devices. Contact Oracle Support and reference My Oracle Support note 1323309.1 for additional information and corrective steps.
Parent topic: Verifying the Database Server Configuration
2.2.1.5 Verifying Physical Drive Configuration
Check your system for critical, degraded, or failed disks.
To verify the database server physical drive configuration, use the following command:
Note:
If you are running Oracle Exadata System Software 19.1.0 or later, substitute /opt/MegaRAID/storcli/storcli64 for /opt/MegaRAID/MegaCli/MegaCli64 in the following commands:
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | grep "Firmware state"
The following is an example of the output for Oracle Exadata Database Machine X4-2, Oracle Exadata Database Machine X3-2, and Oracle Exadata Database Machine X2-2:
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
The drives should show a state of Online, Spun Up. The order of the output is not important. The output for Oracle Exadata Database Machine X3-8 Full Rack or Oracle Exadata Database Machine X2-8 Full Rack should be eight lines of output showing a state of Online, Spun Up.
If your output is different, then investigate and correct the problem.
Degraded virtual drives usually indicate absent or failed physical disks. Critical disks should be replaced immediately to avoid the risk of data loss if the number of failed disks in the node exceeds the count needed to sustain the operations of the system. Failed disks should be replaced quickly.
Parent topic: Verifying the Database Server Configuration
2.2.2 Monitoring a Database Server RAID Set Rebuilding
If a drive in a database server RAID set is replaced, then the progress of the RAID set rebuild should be monitored.
Use the following command on the database server that has the replaced disk. The command is run as the root user.
Note:
If you are running Oracle Exadata System Software 19.1.0 or later, substitute /opt/MegaRAID/storcli/storcli64 for /opt/MegaRAID/MegaCli/MegaCli64 in the following commands:
/opt/MegaRAID/MegaCli/MegaCli64 -pdrbld -showprog -physdrv [disk_enclosure:slot_number] -a0
In the preceding command, disk_enclosure and slot_number indicate the replacement disk identified by the MegaCli64 -PDList command. The following is an example of the output from the command:
Rebuild Progress on Device at Enclosure 252, Slot 2 Completed 41% in 13 Minutes.
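To track the rebuild to completion, you can poll the same command periodically. This is a minimal sketch; adjust the enclosure and slot values to match your replacement disk.

# Re-run the progress check every 60 seconds until the rebuild completes
while true; do
  /opt/MegaRAID/MegaCli/MegaCli64 -pdrbld -showprog -physdrv [252:2] -a0
  sleep 60
done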
2.2.3 Reclaiming a Hot Spare Drive After Upgrading to Oracle Exadata System Software Release 12.1.2.1.0 or Later
Oracle Exadata Database Machines upgraded to Oracle Exadata System Software release 12.1.2.1.0 or later that have a hot spare drive cannot use the reclaimdisks.sh script to reclaim the drive. The following procedure describes how to manually reclaim the drive:
Note:
During the procedure, the database server is restarted twice. The steps in the procedure assume that the Oracle Grid Infrastructure restart is disabled after the server restart.
The sample output shows an Oracle Exadata Database Machine X2-2 database server with four disks. The enclosure identifier, slot number, and other details may be different for your system.
Note:
If you are running Oracle Exadata System Software 19.1.0 or later, substitute the string /opt/MegaRAID/storcli/storcli64 for /opt/MegaRAID/MegaCli/MegaCli64 in the following commands:
2.2.4 Understanding Automated File Deletion Policy
Management Server (MS) includes a file deletion policy for the / (root) directory on the database servers that is triggered when file system utilization is high. Deletion of files is triggered when file system utilization reaches 80 percent, and an alert is sent before the deletion begins. The alert includes the name of the directory and space usage for the subdirectories. In particular, the deletion policy is as follows:
Files in the following directories are deleted using a policy based on the file modification time stamp.
/opt/oracle/dbserver/log
/opt/oracle/dbserver/dbms/deploy/config/metrics
/opt/oracle/dbserver/dbms/deploy/log
Files that are older than the number of days set by the metricHistoryDays attribute are deleted first, then successive deletions occur for earlier files, down to files with modification time stamps older than or equal to 10 minutes, or until file system utilization is less than 75 percent. The metricHistoryDays attribute applies to files in /opt/oracle/dbserver/dbms/deploy/config/metrics. For the other log and trace files, use the diagHistoryDays attribute.
Starting with Oracle Exadata System Software release 12.1.2.2.0, the maximum amount of space for ms-odl.trc and ms-odl.log files is 100 MB (twenty 5 MB files) for *.trc files and 100 MB (twenty 5 MB files) for *.log files. Previously, it was 50 MB (ten 5 MB files) for both *.trc and *.log files.
ms-odl generation files are renamed when they reach 5 MB, and the oldest are deleted when the files use up 100 MB of space.
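As a hedged illustration, you can view or adjust the retention attributes that drive this policy using DBMCLI; attribute availability depends on your software release.

# View the current retention settings
dbmcli -e list dbserver attributes metricHistoryDays, diagHistoryDays

# Example: keep metric files for 7 days
dbmcli -e alter dbserver metricHistoryDays=7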
2.3 Maintaining Flash Disks on Exadata Database Servers
Flash disks should be monitored and replaced when necessary.
Starting with Exadata Database Machine X7-8, the database servers contain flash devices instead of hard disks. These flash devices can be replaced without shutting down the server.
- Monitoring the Status of Flash Disks
You can monitor the status of a flash disk on the Exadata Database Machine by checking its attributes with the DBMCLI LIST PHYSICALDISK command.
- Performing a Hot-Pluggable Replacement of a Flash Disk
For Oracle Exadata X7-8 and X8-8 models, the database server uses hot-pluggable flash disks instead of hard disk drives.
Parent topic: Maintaining Exadata Database Servers
2.3.1 Monitoring the Status of Flash Disks
You can monitor the status of a flash disk on the Exadata Database Machine by checking its attributes with the DBMCLI LIST PHYSICALDISK command.
For example, a flash disk with a status of failed is probably having problems and needs to be replaced.
The Exadata Database Server flash disk statuses are as follows:
- normal
- normal - dropped for replacement
- failed
- failed - dropped for replacement
- failed - rejected due to incorrect disk model
- failed - rejected due to incorrect disk model - dropped for replacement
- failed - rejected due to wrong slot
- failed - rejected due to wrong slot - dropped for replacement
- warning - peer failure
- warning - predictive failure, write-through caching
- warning - predictive failure
- warning - predictive failure - dropped for replacement
- warning - write-through caching
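As a minimal sketch, the following DBMCLI query lists each flash disk and its status so you can spot disks that need attention; the WHERE filter on diskType is an assumption and may need adjusting for your software release.

# List flash disk names and statuses on the database server
dbmcli -e "list physicaldisk attributes name, status where diskType = FlashDisk"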
Parent topic: Maintaining Flash Disks on Exadata Database Servers
2.3.2 Performing a Hot-Pluggable Replacement of a Flash Disk
For Oracle Exadata X7-8 and X8-8 models, the database server uses hot-pluggable flash disks instead of hard disk drives.
Parent topic: Maintaining Flash Disks on Exadata Database Servers
2.4 Adding the Disk Expansion Kit to Database Servers
You can add local storage space to an Oracle Exadata Database Server by using a disk expansion kit.
- Adding the Disk Expansion Kit to Database Servers: X9M-2
- Adding the Disk Expansion Kit to Database Servers: X8M-2 and Prior
Parent topic: Maintaining Exadata Database Servers
2.4.1 Adding the Disk Expansion Kit to Database Servers: X9M-2
This procedure describes adding the disk expansion kit to Oracle Exadata Database Machine X9M-2 database servers.
Before starting, the server must be powered on so that the disk controller can sense the addition of the new drives.
To add the disk expansion kit:
Related Topics
Parent topic: Adding the Disk Expansion Kit to Database Servers
2.4.2 Adding the Disk Expansion Kit to Database Servers: X8M-2 and Prior
Note the following restrictions and requirements:
- The disk expansion kit is supported only on 2-socket systems starting with Oracle Exadata Database Machine X5-2.
- Oracle Exadata System Software release 12.1.2.3.0 or later is required.
- For systems running Oracle Linux 6 (OL6), a reboot is required for the Linux kernel to recognize a newly added disk partition.
- If you are adding the disk expansion kit to an Oracle Exadata Database Machine X7-2 system, and you are using an Oracle Exadata System Software release before 18.1.11, then ensure that the following symbolic link is present on the database server before proceeding:
# ls -l /opt/MegaRAID/MegaCli/MegaCli64
lrwxrwxrwx 1 root root 31 Jun 4 03:40 /opt/MegaRAID/MegaCli/MegaCli64 -> /opt/MegaRAID/storcli/storcli64
If the symbolic link is not present, then use the following commands to create it:
# mkdir -p /opt/MegaRAID/MegaCli
# ln -s /opt/MegaRAID/storcli/storcli64 /opt/MegaRAID/MegaCli/MegaCli64
To add the disk expansion kit to an Oracle Exadata Database Server:
Related Topics
Parent topic: Adding the Disk Expansion Kit to Database Servers
2.5 Adding Memory Expansion Kit to Database Servers
Additional memory can be added to database servers. The following procedure describes how to add the memory:
Additional notes:
- When adding memory to Oracle Exadata Database Machines running Oracle Linux, Oracle recommends updating the /etc/security/limits.conf file with the following:
oracle soft memlock 75%
oracle hard memlock 75%
- Sun Fire X4170 M2 Oracle Database Servers ship from the factory with 96 GB of memory, with 12 of the 18 DIMM slots populated with 8 GB DIMMs. The optional X2-2 Memory Expansion Kit can be used to populate the remaining 6 empty slots with 16 GB DIMMs to bring the total memory to 192 GB (12 x 8 GB and 6 x 16 GB).
The memory expansion kit is primarily for consolidation workloads where many databases are run on each database server. In this scenario, the CPU usage is often low while the memory usage is very high.
However, there is a downside to populating all the memory slots: the frequency of the memory DIMMs drops from 1333 MHz to 800 MHz. The performance effect of the slower memory appears as increased CPU utilization. The average measured increase in CPU utilization is typically between 5% and 10%, but the increase varies greatly by workload. In test workloads, several workloads had almost zero increase, while one workload had as high as a 20% increase.
Parent topic: Maintaining Exadata Database Servers
2.6 Verifying and Modifying the Link Speed on the Client Network Ports for X7 and Later Systems
You can configure 10 GbE connections or 25 GbE connections on the client network on Oracle Exadata X7 and later database servers.
Note:
You should configure the client network ports using Oracle Exadata Deployment Assistant (OEDA) during system deployment. See Using Oracle Exadata Deployment Assistant.
The following steps may be necessary to configure a client access port if the OEDA deployment was not performed or was performed incorrectly. You can also use these steps to change the client network from 10 GbE to 25 GbE, or from 25 GbE to 10 GbE.
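As a hedged sketch, you can check the current link speed and link state of a client network port before and after making changes; replace eth1 with the interface you are verifying.

# Display negotiated speed and link status for a client network interface
ethtool eth1 | grep -E "Speed|Duplex|Link detected"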
Parent topic: Maintaining Exadata Database Servers
2.7 Adding and Configuring an Extra Network Card on Oracle Exadata X6-2 and Later
You can add an additional network card on Oracle Exadata X6-2 and later systems.
Prerequisites
Ensure you are using the correct link speed for Oracle Exadata X7-2 and later compute nodes. Complete the steps in Verifying and Modifying the Link Speed on the Client Network Ports for X7 and Later Systems.
Oracle Exadata X6-2
Oracle Exadata X6-2 database server offers highly available copper 10G network on the motherboard, and an optical 10G network via a PCI card on slot 2. Oracle offers an additional Ethernet card for customers that require additional connectivity. The additional card provides either dual port 10GE copper connectivity (part number 7100488) or dual port 10GE optical connectivity (part number X1109A-Z). You install this card in PCIe slot 1 on the Oracle Exadata X6-2 database server.
After you install the card and connect it to the network, the Oracle Exadata System Software automatically recognizes the new card and
configures the two ports as eth6
and eth7
interfaces on the X6-2 database server. You can
use these additional ports to provide an additional client network, or to create a
separate backup or data recovery network. On a database server that runs virtual
machines, you could use this to isolate traffic from two virtual machines.
Oracle Exadata X7-2
Oracle Exadata X7-2 and later database servers offer 2 copper (RJ45) or 2 optical (SFP28) network connections on the motherboard plus 2 optical (SFP28) network connections in PCIe card slot 1. Oracle offers an additional 4 copper (RJ45) 10G network connections for customers that require additional connectivity. The additional card is the Oracle Quad Port 10 GBase-T card (part number 7111181). You install this card in PCIe slot 3 on the database server.
After you install the card and connect it to the network, the Oracle Exadata System Software automatically recognizes the new card and configures the four ports as eth5 to eth8 interfaces on the database server. You can use these additional ports to provide an additional client network, or to create separate backup or data recovery networks. On a database server that runs virtual machines, you could use this additional client network to isolate traffic from two virtual machines.
After you add a card to a database server, you need to configure the card. See the following topics for instructions:
Oracle Exadata X8-2 and X8M-2
Oracle Exadata X8-2 and X8M-2 database servers offer 2 copper (RJ45) or 2 copper/optical (SFP28) network connections on the motherboard plus 2 optical (SFP28) network connections in PCIe card slot 1. Oracle offers an additional 4 copper 1/10G (RJ45) or 2 optical 10/25G (SFP28) network connections for customers that require additional connectivity. The additional cards are:
- Quad-Port 10 GBase-T card (part number 7111181)
- Dual-Port 25 Gb Ethernet adapter (part number 7118016)
The additional card is installed in PCIe slot 3 on the database server.
After you install the card and connect it to the network, the Oracle Exadata System Software automatically recognizes the new card and configures either the four ports as eth5 to eth8 interfaces for the quad port card, or eth5 and eth6 for the dual port card on the database server. You can use these additional ports to provide an additional client network, or to create separate backup or data recovery networks. On a database server that runs virtual machines, you could use this additional client network to isolate traffic from two virtual machines.
After you add a card to a database server, you need to configure the card. See the following topics for instructions:
Oracle Exadata X9M-2
Oracle Exadata X9M-2 database servers offer a variety of flexible network configurations using one, two, or three network interface cards. Each card can be one of the following:
- Quad-Port 10 GBase-T card (RJ45) (part number 7111181)
- Dual-Port 25 Gb Ethernet adapter (SFP28) (part number 7118016)
After initial deployment, you can add network interface cards up to a maximum of three network interface cards on each database server.
On non-Eighth Rack systems only, you can optionally install one Dual-Port 100 Gb Ethernet adapter (QSFP28) (part number 7603661) in PCIe slot 2.
After you install the card and connect it to the network, the Oracle Exadata System Software automatically recognizes the new card and configures the physical ports as follows:
- Quad-Port 10 GBase-T card in PCIe slot 1: eth1, eth2, eth3, and eth4
- Dual-Port 25 Gb Ethernet adapter in PCIe slot 1: eth1 and eth2
- Quad-Port 10 GBase-T card in PCIe slot 2: eth5, eth6, eth7, and eth8
- Dual-Port 25 Gb Ethernet adapter in PCIe slot 2: eth5 and eth6
- Dual-Port 100 Gb Ethernet adapter in PCIe slot 2: eth5 and eth6
- Quad-Port 10 GBase-T card in PCIe slot 3: eth9, eth10, eth11, and eth12
- Dual-Port 25 Gb Ethernet adapter in PCIe slot 3: eth9 and eth10
You can use the network ports to provide multiple client networks, or to create separate dedicated networks for backup/recovery and bulk data transfer. On a database server that runs virtual machines (VMs), you can use multiple client networks to isolate traffic for each VM cluster.
After you add a card to a database server, you need to configure the card. See the following topics for instructions:
- Viewing the Network Interfaces
To view the network interfaces, you can run the ipconf.pl command.
- Configuring the Additional Network Card for a Non-Oracle VM Environment
You can configure the additional network card on an Oracle Exadata X6-2 or later database server for a non-Oracle VM environment.
- Configuring the Additional Network Card for an Oracle VM Environment
You can configure the additional network card on an Oracle Exadata X6-2 and later database server for an Oracle VM environment.
Parent topic: Maintaining Exadata Database Servers
2.7.1 Viewing the Network Interfaces
To view the network interfaces, you can run the ipconf.pl command.
Example 2-2 Viewing the default network interfaces for an Oracle Exadata X8M-2 database server
The following example shows the output for an Oracle Exadata X8M-2 database server without the additional network card. In addition to the RDMA Network Fabric interfaces, the output shows the interfaces for three network cards:
- A single port 1/10Gb card, on eth0
- A dual port 10 or 25Gb card, on eth1 and eth2
- A dual port 10 or 25Gb card, on eth3 and eth4
[root@scaz23adm01 ibdiagtools]# /opt/oracle.cellos/ipconf.pl
[Info]: ipconf command line: /opt/oracle.cellos/ipconf.pl
Logging started to /var/log/cellos/ipconf.log
Interface re0 is Linked. hca: mlx5_0
Interface re1 is Linked. hca: mlx5_0
Interface eth0 is Linked. driver/mac: igb/00:10:e0:c3:b7:9c
Interface eth1 is Unlinked. driver/mac: bnxt_en/00:10:e0:c3:b7:9d (slave of bondeth0)
Interface eth2 is Linked. driver/mac: bnxt_en/00:10:e0:c3:b7:9d (slave of bondeth0)
Interface eth3 is Unlinked. driver/mac: bnxt_en/00:0a:f7:c3:28:30
Interface eth4 is Unlinked. driver/mac: bnxt_en/00:0a:f7:c3:28:38
Example 2-3 Viewing the default network interfaces for an Oracle Exadata X7-2 or X8-2 database server
The following example shows the output for an Oracle Exadata X7-2 or X8-2 database server without the additional network card. In addition to the RDMA Network Fabric interfaces, the output shows the interfaces for three network cards:
- A single port 10Gb card, on eth0
- A dual port 10 or 25Gb card, on eth1 and eth2
- A dual port 25Gb card, on eth3 and eth4
# /opt/oracle.cellos/ipconf.pl
Logging started to /var/log/cellos/ipconf.log
Interface ib0 is Linked. hca: mlx4_0
Interface ib1 is Linked. hca: mlx4_0
Interface eth0 is Linked. driver/mac: igb/00:10:e0:c3:ba:72
Interface eth1 is Linked. driver/mac: bnxt_en/00:10:e0:c3:ba:73
Interface eth2 is Linked. driver/mac: bnxt_en/00:10:e0:c3:ba:74
Interface eth3 is Linked. driver/mac: bnxt_en/00:0a:f7:c3:14:a0 (slave of bondeth0)
Interface eth4 is Linked. driver/mac: bnxt_en/00:0a:f7:c3:14:a0 (slave of bondeth0)
Example 2-4 Viewing the default network interfaces for an Oracle Exadata X6-2 database server
The following example shows the output for an Oracle Exadata X6-2 database server without the additional network card. In addition to the RDMA Network Fabric interfaces, the output shows the interfaces for two network cards:
- A quad port 10Gb card, on eth0 to eth3
- A dual port 10Gb card, on eth4 and eth5
# cd /opt/oracle.cellos/
# ./ipconf.pl
Logging started to /var/log/cellos/ipconf.log
Interface ib0 is Linked. hca: mlx4_0
Interface ib1 is Linked. hca: mlx4_0
Interface eth0 is Linked. driver/mac: ixgbe/00:10:e0:8b:24:b6
Interface eth1 is ..... Linked. driver/mac: ixgbe/00:10:e0:8b:24:b7
Interface eth2 is ..... Linked. driver/mac: ixgbe/00:10:e0:8b:24:b8
Interface eth3 is ..... Linked. driver/mac: ixgbe/00:10:e0:8b:24:b9
Interface eth4 is Linked. driver/mac: ixgbe/90:e2:ba:ac:20:ec (slave of bondeth0)
Interface eth5 is Linked. driver/mac: ixgbe/90:e2:ba:ac:20:ec (slave of bondeth0)
2.7.2 Configuring the Additional Network Card for a Non-Oracle VM Environment
You can configure the additional network card on an Oracle Exadata X6-2 or later database server for a non-Oracle VM environment.
This procedure assumes that you have already installed the network card in the Oracle Exadata database server but have not yet completed the configuration with Oracle Exadata Deployment Assistant (OEDA).
WARNING:
If you have already installed Oracle Grid Infrastructure on Oracle Exadata, then refer to the Oracle Clusterware documentation. Use caution when changing the network interfaces for the cluster.
2.7.3 Configuring the Additional Network Card for an Oracle VM Environment
You can configure the additional network card on an Oracle Exadata X6-2 and later database server for an Oracle VM environment.
This procedure assumes that you have already installed the network card in the Oracle Exadata database server but have not yet completed the configuration with Oracle Exadata Deployment Assistant (OEDA).
Caution:
Do not attempt this procedure if you have already installed Oracle Grid Infrastructure on Oracle Exadata.
2.8 Increasing the Number of Active Cores on Database Servers
You can increase the number of active cores on Oracle Exadata using capacity-on-demand.
The number of active cores on the database servers on Oracle Exadata Database Machine X4-2 and newer systems can be reduced during installation. The number of active cores can be increased when additional capacity is needed. This is known as capacity-on-demand.
Additional cores are increased in increments of 2 cores on 2-socket systems, and in increments of 8 cores on 8-socket systems. The following table lists the capacity-on-demand core processor configurations.
Table 2-3 Capacity-on-Demand Core Processor Configurations

Oracle Exadata | Eligible Systems | Minimum Cores per Server | Maximum Cores per Server | Core Increments
---|---|---|---|---
Oracle Exadata X9M-2 | Any configuration except Eighth Rack | 14 | 64 | From 14 to 64, in increments of 2: 14, 16, 18, …, 62, 64
Oracle Exadata X9M-2 | Eighth Rack | 8 | 32 | From 8 to 32, in increments of 2: 8, 10, 12, …, 30, 32
Oracle Exadata X7-2, X8-2, and X8M-2 | Any configuration except Eighth Rack | 14 | 48 | From 14 to 48, in increments of 2: 14, 16, 18, …, 46, 48
Oracle Exadata X7-2, X8-2, and X8M-2 | Eighth Rack | 8 | 24 | From 8 to 24, in increments of 2: 8, 10, 12, …, 22, 24
Oracle Exadata Database Machine X6-2 | Any configuration except Eighth Rack | 14 | 44 | From 14 to 44, in increments of 2: 14, 16, 18, …, 42, 44
Oracle Exadata Database Machine X6-2 | Eighth Rack | 8 | 22 | From 8 to 22, in increments of 2: 8, 10, 12, …, 20, 22
Oracle Exadata Database Machine X5-2 | Any configuration except Eighth Rack | 14 | 36 | From 14 to 36, in increments of 2: 14, 16, 18, …, 34, 36
Oracle Exadata Database Machine X5-2 | Eighth Rack | 8 | 18 | From 8 to 18, in increments of 2: 8, 10, 12, 14, 16, 18
Oracle Exadata Database Machine X4-2 | Full rack, Half rack, Quarter rack | 12 | 24 | From 12 to 24, in increments of 2: 12, 14, 16, 18, 20, 22, 24
Oracle Exadata X7-8, X8-8, X8M-8, and X9M-8 | Any configuration | 56 | 192 | From 56 to 192, in increments of 8: 56, 64, 72, …, 184, 192
Oracle Exadata X6-8 and X5-8 | Any configuration | 56 | 144 | From 56 to 144, in increments of 8: 56, 64, 72, …, 136, 144
Oracle Exadata Database Machine X4-8 | Full rack | 48 | 120 | From 48 to 120, in increments of 8: 48, 56, 64, …, 112, 120
Note:
Oracle recommends licensing the same number of cores on each server, in case of failover.
Database servers can be added one at a time, and capacity-on-demand can be applied to the individual database servers. This option includes Oracle Exadata Database Machine X5-2 Eighth Racks.
The database server must be restarted after enabling additional cores. If the database servers are part of a cluster, then they can be enabled in a rolling fashion.
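As a hedged sketch, capacity-on-demand core changes are made through DBMCLI on each database server; the pendingCoreCount attribute takes effect after the server restarts.

# View the current and pending core counts
dbmcli -e list dbserver attributes coreCount, pendingCoreCount

# Example: request 16 active cores (applied at the next restart)
dbmcli -e alter dbserver pendingCoreCount=16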
Parent topic: Maintaining Exadata Database Servers
2.9 Extending LVM Partitions
Logical Volume Manager (LVM) provides flexibility to reorganize the partitions in the database servers.
Note:
- Keep at least 1 GB of free space in the VGExaDb volume group. This space is used for the LVM snapshot created by the dbnodeupdate.sh utility during software maintenance.
- If you make snapshot-based backups of the / (root) and /u01 directories by following the steps in Creating a Snapshot-Based Backup of Oracle Linux Database Server, then keep at least 6 GB of free space in the VGExaDb volume group.
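As a quick, hedged check of the available space before resizing, you can query the volume group directly:

# Show total and free space in the VGExaDb volume group
vgdisplay -s VGExaDb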
This section contains the following topics:
- Extending the root LVM Partition
The procedure for extending the root LVM partition depends on your Oracle Exadata System Software release.
- Resizing a Non-root LVM Partition
The procedure for resizing a non-root LVM partition depends on your Oracle Exadata System Software release.
- Extending the Swap Partition
This procedure describes how to extend the size of the swap (/swap) partition.
Parent topic: Maintaining Exadata Database Servers
2.9.1 Extending the root LVM Partition
The procedure for extending the root LVM partition depends on your Oracle Exadata System Software release.
- Extending the root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later
- Extending the root LVM Partition on Systems Running Oracle Exadata System Software Earlier than Release 11.2.3.2.1
You can extend the size of the root (/) partition on systems running Oracle Exadata System Software earlier than release 11.2.3.2.1 using this procedure.
Parent topic: Extending LVM Partitions
2.9.1.1 Extending the root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later
The following procedure describes how to extend the size of the root (/) partition on systems running Oracle Exadata System Software release 11.2.3.2.1 or later:
Note:
- This procedure does not require an outage on the server.
- For management domain systems, the active and inactive Sys LVMs are LVDbSys2 and LVDbSys3 instead of LVDbSys1 and LVDbSys2.
- Make sure that LVDbSys1 and LVDbSys2 are sized the same.
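A minimal sketch of the online extension, assuming the default LVDbSys1/LVDbSys2 layout and an ext4 file system; verify free space in VGExaDb first and size both logical volumes identically:

# Extend both Sys logical volumes by the same amount (example: 10 GB each)
lvextend -L +10G /dev/VGExaDb/LVDbSys1
lvextend -L +10G /dev/VGExaDb/LVDbSys2

# Grow the active root file system online
resize2fs /dev/VGExaDb/LVDbSys1

# The inactive volume is checked and resized while unmounted
e2fsck -f /dev/VGExaDb/LVDbSys2
resize2fs /dev/VGExaDb/LVDbSys2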
Parent topic: Extending the root LVM Partition
2.9.1.2 Extending the root LVM Partition on Systems Running Oracle Exadata System Software Earlier than Release 11.2.3.2.1
You can extend the size of the root (/) partition on systems running Oracle Exadata System Software earlier than release 11.2.3.2.1 using this procedure.
Note:
- This procedure requires the system to be offline and restarted.
- Keep at least 1 GB of free space in the VGExaDb volume group to be used for the LVM snapshot created by the dbnodeupdate.sh utility during software maintenance. If you make snapshot-based backups of the / (root) and /u01 directories by following the steps in Creating a Snapshot-Based Backup of Oracle Linux Database Server, then keep at least 6 GB of free space in the VGExaDb volume group.
- For management domain systems, the active and inactive Sys LVMs are LVDbSys2 and LVDbSys3 instead of LVDbSys1 and LVDbSys2.
- Make sure LVDbSys1 and LVDbSys2 are sized the same.
Parent topic: Extending the root LVM Partition
2.9.2 Resizing a Non-root LVM Partition
The procedure for resizing a non-root LVM partition depends on your Oracle Exadata System Software release.
- Extending a Non-root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later
This procedure describes how to extend the size of a non-root (/u01) partition on systems running Oracle Exadata System Software release 11.2.3.2.1 or later.
- Extending a Non-root LVM Partition on Systems Running Oracle Exadata System Software Earlier than Release 11.2.3.2.1
This procedure describes how to extend the size of a non-root (/u01) partition on systems running Oracle Exadata System Software earlier than release 11.2.3.2.1.
- Reducing a Non-root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later
You can reduce the size of a non-root (/u01) partition on systems running Oracle Exadata System Software release 11.2.3.2.1 or later.
Parent topic: Extending LVM Partitions
2.9.2.1 Extending a Non-root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later
This procedure describes how to extend the size of a non-root (/u01) partition on systems running Oracle Exadata System Software release 11.2.3.2.1 or later.
This procedure does not require an outage on the server.
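A minimal sketch of the online extension, assuming /u01 resides on the default LVDbOra1 logical volume with an ext4 file system:

# Extend the /u01 logical volume (example: 20 GB) and grow the file system online
lvextend -L +20G /dev/VGExaDb/LVDbOra1
resize2fs /dev/VGExaDb/LVDbOra1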
Parent topic: Resizing a Non-root LVM Partition
2.9.2.2 Extending a Non-root LVM Partition on Systems Running Oracle Exadata System Software Earlier than Release 11.2.3.2.1
This procedure describes how to extend the size of a non-root (/u01) partition on systems running Oracle Exadata System Software earlier than release 11.2.3.2.1.
In this procedure, /dev/VGExaDb/LVDbOra1 is mounted at /u01.
Note:
- Keep at least 1 GB of free space in the VGExaDb volume group. This space is used for the LVM snapshot created by the dbnodeupdate.sh utility during software maintenance.
- If you make snapshot-based backups of the / (root) and /u01 directories by following the steps in Creating a Snapshot-Based Backup of Oracle Linux Database Server, then keep at least 6 GB of free space in the VGExaDb volume group.
2.9.2.3 Reducing a Non-root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later
You can reduce the size of a non-root (/u01) partition on systems running Oracle Exadata System Software release 11.2.3.2.1 or later.
Note:
- This procedure does not require an outage on the server.
- It is recommended that you back up your file system before performing this procedure.
Parent topic: Resizing a Non-root LVM Partition
2.9.3 Extending the Swap Partition
This procedure describes how to extend the size of the swap (/swap) partition.
Note:
This procedure requires the system to be offline and restarted.
Keep at least 1 GB of free space in the VGExaDb volume group to be used for the Logical Volume Manager (LVM) snapshot created by the dbnodeupdate.sh utility during software maintenance. If you make snapshot-based backups of the / (root) and /u01 directories by following the steps in "Creating a Snapshot-Based Backup of Oracle Linux Database Server", then keep at least 6 GB of free space in the VGExaDb volume group.
Parent topic: Extending LVM Partitions
2.10 Creating a Snapshot-Based Backup of Oracle Linux Database Server
A backup should be made before and after every significant change to the software on the database server. For example, a backup should be made before and after the following procedures:
- Application of operating system patches
- Application of Oracle patches
- Reconfiguration of significant operating parameters
- Installation or reconfiguration of significant non-Oracle software
Starting with Oracle Exadata System Software release 19.1.0, the SSHD ClientAliveInterval defaults to 600 seconds. If the time needed for completing backups exceeds 10 minutes, then you can specify a larger value for ClientAliveInterval in the /etc/ssh/sshd_config file. You must restart the SSH service for changes to take effect. After the long-running operation completes, remove the modification to the ClientAliveInterval parameter and restart the SSH service.
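A hedged example of making that temporary change; the 3600-second value is illustrative only, and the sed command assumes the directive is already present in the file.

# Set a larger timeout in /etc/ssh/sshd_config (assumes the directive exists)
sed -i 's/^ClientAliveInterval.*/ClientAliveInterval 3600/' /etc/ssh/sshd_config

# Restart SSH for the change to take effect
systemctl restart sshd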
This section contains the following topics:
- Creating a Snapshot-Based Backup of Exadata Database Servers X8M and Later with Uncustomized Partitions
This procedure describes how to take a snapshot-based backup of an Oracle Exadata X8M and later database server with uncustomized storage partitions.
- Creating a Snapshot-Based Backup of Exadata X8 or Earlier Database Servers with Uncustomized Partitions
This procedure describes how to take a snapshot-based backup. The values shown in the procedure are examples.
- Creating a Snapshot-Based Backup of Oracle Linux Database Server with Customized Partitions
Parent topic: Maintaining Exadata Database Servers
2.10.1 Creating a Snapshot-Based Backup of Exadata Database Servers X8M and Later with Uncustomized Partitions
This procedure describes how to take a snapshot-based backup of an Oracle Exadata X8M and later database server with uncustomized storage partitions.
Starting with Oracle Exadata X8M and Oracle Exadata System Software release 19.3, the database servers use the following storage partitions:
File System Mount Point | Logical Volume Name
---|---
/ (root) | LVDbSys1
/home | LVDbHome
/tmp | LVDbTmp
/var | LVDbVar1
/var/log | LVDbVarLog
/var/log/audit | LVDbVarLogAudit
/u01 | LVDbOra1
Note:
- This procedure relies on the exact storage partitions that are originally shipped on the database server. If you modified the storage partitions in any way, then you cannot use this procedure and the associated recovery procedure without modification. Modifications include changing partition sizes, renaming partitions, adding partitions, or removing partitions.
- All steps must be performed as the root user.
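A compressed sketch of the overall approach, assuming an NFS backup share is available and the default LVDbSys1 root volume; the full procedure in this topic covers each file system listed above.

# Create an LVM snapshot of the root volume and mount it
lvcreate -L 1G -s -n root_snap /dev/VGExaDb/LVDbSys1
mkdir -p /root/mnt
mount /dev/VGExaDb/root_snap /root/mnt   # add -o nouuid if the file system is XFS

# Archive the snapshot contents to the NFS backup location (hypothetical path)
cd /root/mnt
tar -pjcvf /nfs_backup/mybackup.tar.bz2 * > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr

# Clean up the snapshot after the archive completes
cd /
umount /root/mnt
lvremove -f /dev/VGExaDb/root_snap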
2.10.2 Creating a Snapshot-Based Backup of Exadata X8 or Earlier Database Servers with Uncustomized Partitions
This procedure describes how to take a snapshot-based backup. The values shown in the procedure are examples.
If you have not customized the database server partitions from their original shipped configuration, then use the procedures in this section to take a backup and to restore the database server using that backup.
Note:
- The recovery procedure restores the exact partitions, including the name and sizes, as they were originally shipped. If you modified the partitions in any way, then you cannot use this procedure. Modifications include changing sizes, renaming, or adding or removing partitions.
- All steps must be performed as the root user.
2.10.3 Creating a Snapshot-Based Backup of Oracle Linux Database Server with Customized Partitions
When you have customized the partitions, the backup procedure is generally the same as the procedure used for non-customized database servers, with the following alterations:
- You must add the commands to back up any additional partitions. Throughout the procedure, use the command relating to the /u01 partition as a template, and modify the arguments to suit.
- If any partitions are altered, then use the modified attributes in your commands. For example, if /u01 is renamed to /myown_u01, then use /myown_u01 in the commands.
Related Topics
2.11 Recovering Oracle Linux Database Servers Using a Snapshot-Based Backup
You can recover database server file systems running Oracle Linux using a snapshot-based backup after severe disaster conditions strike the database server, or when the server hardware is replaced to such an extent that it amounts to new hardware.
For example, replacing all hard disks leaves no trace of the original software on the system. This is similar to replacing the complete system as far as the software is concerned. In addition, this procedure provides a method for disaster recovery of the database servers using an LVM snapshot-based backup taken when the database server was healthy, before the disaster condition.
The recovery procedures described in this section do not include backup or recovery of storage servers or the data within the Oracle databases. Oracle recommends testing the backup and recovery procedures on a regular basis.
- Overview of Snapshot-Based Recovery of Database Servers
The recovery process consists of a series of tasks.
- Recovering Oracle Linux Database Server with Uncustomized Partitions
You can recover the Oracle Linux database server from a snapshot-based backup when using uncustomized partitions.
- Recovering Exadata X9M Database Servers with Customized Partitions
This procedure describes how to recover an Oracle Exadata X9M database server with RoCE Network Fabric from a snapshot-based backup when using customized partitions.
- Recovering Exadata X8M Database Servers with Customized Partitions
This procedure describes how to recover an Oracle Exadata X8M database server with RoCE Network Fabric from a snapshot-based backup when using customized partitions.
- Recovering Exadata Database Servers X7 or X8 with Customized Partitions
This procedure describes how to recover an Oracle Exadata X7 or X8 Oracle Linux database server with InfiniBand Network Fabric from a snapshot-based backup when using customized partitions.
- Recovering Exadata X6 or Earlier Database Servers with Customized Partitions
This procedure describes how to recover Oracle Exadata Database Servers for Oracle Exadata X6-2 or earlier running Oracle Linux from a snapshot-based backup when using customized partitions.
- Configuring Oracle Exadata Database Machine Eighth Rack Oracle Linux Database Server After Recovery
Parent topic: Maintaining Exadata Database Servers
2.11.1 Overview of Snapshot-Based Recovery of Database Servers
The recovery process consists of a series of tasks.
The recovery procedures use the diagnostics.iso image as a virtual CD-ROM to restart the database server in rescue mode using the ILOM.
Note:
Restoring files from tape may require additional drives to be loaded, and is not covered in this chapter. Oracle recommends backing up files to an NFS location, and using existing tape options to back up and recover from the NFS host.
The general work flow includes the following tasks:
- Recreate the following:
  - Boot partitions
  - Physical volumes
  - Volume groups
  - Logical volumes
  - File system
  - Swap partition
- Activate the swap partition.
- Ensure the /boot partition is the active boot partition.
- Restore the data.
- Reconfigure GRUB.
- Restart the server.
If you use quorum disks, then after recovering the database servers from backup, you must manually reconfigure the quorum disk for the recovered server. See Reconfigure Quorum Disk After Restoring a Database Server for more information.
2.11.2 Recovering Oracle Linux Database Server with Uncustomized Partitions
You can recover the Oracle Linux database server from a snapshot-based backup when using uncustomized partitions.
This procedure is applicable when the layout of the partitions, logical volumes, file systems, and their sizes are equal to the layout when the database server was initially deployed.
Caution:
All existing data on the disks is lost during the procedure.
2.11.3 Recovering Exadata X9M Database Servers with Customized Partitions
This procedure describes how to recover an Oracle Exadata X9M database server with RoCE Network Fabric from a snapshot-based backup when using customized partitions.
2.11.4 Recovering Exadata X8M Database Servers with Customized Partitions
This procedure describes how to recover an Oracle Exadata X8M database server with RoCE Network Fabric from a snapshot-based backup when using customized partitions.
2.11.5 Recovering Exadata Database Servers X7 or X8 with Customized Partitions
This procedure describes how to recover an Oracle Exadata X7 or X8 Oracle Linux database server with InfiniBand Network Fabric from a snapshot-based backup when using customized partitions.
Note:
This task assumes you are running Oracle Exadata System Software release 18c (18.1.0) or greater.
2.11.6 Recovering Exadata X6 or Earlier Database Servers with Customized Partitions
This procedure describes how to recover Oracle Exadata Database Servers for Oracle Exadata X6-2 or earlier running Oracle Linux from a snapshot-based backup when using customized partitions.
2.11.7 Configuring Oracle Exadata Database Machine Eighth Rack Oracle Linux Database Server After Recovery
After the Oracle Linux database server in Oracle Exadata Database Machine Eighth Rack has been re-imaged, restored, or rescued, you can then reconfigure the eighth rack.
2.11.7.1 Configuring Eighth Rack On X3-2 or Later Machines Running Oracle Exadata Storage Server Release 12.1.2.3.0 or Later
The following procedure should be performed after the Oracle Linux database server in Oracle Exadata Database Machine Eighth Rack has been re-imaged, restored, or rescued.
For X3-2 systems, use this method only if you are running Oracle Exadata System Software release 12.1.2.3.0 or later.
2.12 Re-Imaging the Oracle Exadata Database Server
The re-image procedure is necessary when a database server needs to be brought to an initial state for various reasons.
Some example scenarios for re-imaging the database server are:
- You want to install a new server and need to use an earlier release than is in the image already installed on the server.
- You need to replace a damaged database server with a new database server.
- Your database server had multiple disk failures causing local disk storage failure and you do not have a database server backup.
- You want to repurpose the server to a new rack.
During the re-imaging procedure, the other database servers on Oracle Exadata are available. When the new server is added to the cluster, the software is copied from an existing database server to the new server. It is your responsibility to restore scripting, CRON jobs, maintenance actions, and non-Oracle software.
Note:
The procedures in this section assume the database is Oracle Database 11g Release 2 (11.2) or later.
Starting with Oracle Exadata System Software release 19.1.0, Secure Eraser is automatically started during re-imaging if the hardware supports Secure Eraser. This significantly simplifies the re-imaging procedure while maintaining performance. Now, when re-purposing a rack, you only have to image the rack and the secure data erasure is taken care of transparently as part of the process.
The following tasks describes how to re-image an Oracle Exadata Database Server running Oracle Linux:
- Contact Oracle Support Services
If a failed server is being replaced, open a support request with Oracle Support Services.
- Download Latest Release of Cluster Verification Utility
The latest release of the cluster verification utility (cluvfy) is available from My Oracle Support.
- Remove the Database Server from the Cluster
If you are reimaging a failed server or repurposing a server, follow the steps in this task to remove the server from the cluster before you reimage it. If you are reimaging the server for a different reason, skip this task and proceed with the reimaging task next.
- Image the Database Server
After the database server has been installed or replaced, you can image the new database server.
- Configure the Re-imaged Database Server
The re-imaged database server does not have any host names, IP addresses, DNS or NTP settings. The steps in this task describe how to configure the re-imaged database server.
- Prepare the Re-imaged Database Server for the Cluster
This task describes how to ensure the changes made during initial installation are done to the re-imaged, bare metal database server.
- Apply Oracle Exadata System Software Patch Bundles to the Replacement Database Server
Oracle periodically releases Oracle Exadata System Software patch bundles for Oracle Exadata.
- Clone Oracle Grid Infrastructure to the Replacement Database Server
This procedure describes how to clone Oracle Grid Infrastructure to the replacement database server.
- Clone Oracle Database Homes to the Replacement Database Server
The following procedure describes how to clone the Oracle Database homes to the replacement server.
Parent topic: Maintaining Exadata Database Servers
2.12.1 Contact Oracle Support Services
If a failed server is being replaced, open a support request with Oracle Support Services.
The support engineer will identify the failed server, and send a replacement. The support engineer will ask for the output from the imagehistory command run from a running database server. The output provides a link to the computeImageMaker file that was used to image the original database server, and provides a means to restore the system to the same level.
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.12.2 Download Latest Release of Cluster Verification Utility
The latest release of the cluster verification utility (cluvfy) is available from My Oracle Support.
See My Oracle Support note 316817.1 for download instructions and other information.
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.12.3 Remove the Database Server from the Cluster
If you are reimaging a failed server or repurposing a server, follow the steps in this task to remove the server from the cluster before you reimage it. If you are reimaging the server for a different reason, skip this task and proceed with the reimaging task next.
The steps in this task are performed using a working database server in the cluster. In the following commands, working_server is a working database server, and failed_server is the database server you are removing, either because it failed or it is being repurposed.
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.12.4 Image the Database Server
After the database server has been installed or replaced, you can image the new database server.
You can use installation media on a USB thumb drive, or a touchless option using PXE or ISO attached to the ILOM. See Imaging a New System in Oracle Exadata Database Machine Installation and Configuration Guide for the details.
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.12.5 Configure the Re-imaged Database Server
The re-imaged database server does not have any host names, IP addresses, DNS or NTP settings. The steps in this task describe how to configure the re-imaged database server.
You need the following information prior to configuring the re-imaged database server:
- Name servers
- Time zone, such as Americas/Chicago
- NTP servers
- IP address information for the management network
- IP address information for the client access network
- IP address information for the RDMA Network Fabric
- Canonical host name
- Default gateway
The information should be the same on all database servers in Oracle Exadata. The IP addresses can be obtained from DNS. In addition, a document with the information should have been provided when Oracle Exadata was installed.
The following procedure describes how to configure the re-imaged database server:
- Power on the replacement database server. When the system boots, it automatically runs the Configure Oracle Exadata routine, and prompts for information.
- Enter the information when prompted, and confirm the settings. The startup process will continue. (A verification sketch follows the note below.)
Note:
-
If the database server does not use all network interfaces, then the configuration process stops and warns that some network interfaces are disconnected. It prompts whether to retry the discovery process. Respond with yes or no, as appropriate for the environment. -
If bonding is used for the client access network, then it is set in the default active-passive mode at this time.
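After the configuration completes, you can spot-check the settings with standard operating system tools. The following commands are a hedged illustration; the host name is a placeholder, and depending on the image, NTP status may be reported by ntpq or by chronyc sources:
# hostname
# nslookup dbnode01.example.com
# ntpq -p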
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.12.6 Prepare the Re-imaged Database Server for the Cluster
This task describes how to ensure the changes made during initial installation are done to the re-imaged, bare metal database server.
Note:
For Oracle VM systems, follow the procedure in Expanding an Oracle RAC Cluster on Oracle VM Using OEDACLI.
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.12.7 Apply Oracle Exadata System Software Patch Bundles to the Replacement Database Server
Oracle periodically releases Oracle Exadata System Software patch bundles for Oracle Exadata.
If a patch bundle has been applied to the working database servers that is later than the release of the computeImageMaker file, then the patch bundle must be applied to the replacement Oracle Exadata Database Server. Determine if a patch bundle has been applied as follows:
-
Prior to Oracle Exadata System Software release 11.2.1.2.3, the database servers did not maintain version history information. To determine the release number, log in to Oracle Exadata Storage Server, and run the following command:
imageinfo -ver
If the command shows a different release than the release used by the computeImageMaker file, then an Oracle Exadata System Software patch has been applied to Oracle Exadata Database Machine and must be applied to the replacement Oracle Exadata Database Server. -
Starting with Oracle Exadata System Software release 11.2.1.2.3, the imagehistory command exists on the Oracle Exadata Database Server. Compare information on the replacement Oracle Exadata Database Server to information on a working Oracle Exadata Database Server. If the working database server has a later release, then apply the Oracle Exadata Storage Server patch bundle to the replacement Oracle Exadata Database Server (see the comparison sketch after this list).
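A minimal comparison sketch, assuming the standard Exadata image utilities are in the default search path; run each command as the root user on both the replacement server and a working database server, then compare the output:
# imagehistory
# imageinfo -ver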
2.12.8 Clone Oracle Grid Infrastructure to the Replacement Database Server
This procedure describes how to clone Oracle Grid Infrastructure to the replacement database server.
In the following commands, working_server is a working database server, and replacement_server is the replacement database server. The commands in this procedure are run from a working database server as the Grid home owner. When the root user is needed to run a command, it will be called out.
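As a flavor of the cloning flow, Oracle Grid Infrastructure cluster extension is typically driven by the addnode.sh utility from the Grid home. The following invocation is a hedged sketch only, not the complete documented procedure; the node and VIP names are placeholders:
$ Grid_home/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={replacement_server}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={replacement_server-vip}"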
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.12.9 Clone Oracle Database Homes to the Replacement Database Server
The following procedure describes how to clone the Oracle Database homes to the replacement server.
Run the commands from a working database server as the oracle user. When the root user is needed to run a command, it will be called out.
Parent topic: Re-Imaging the Oracle Exadata Database Server
2.13 Changing Existing Elastic Configurations for Database Servers
Elastic configurations provide a flexible and efficient mechanism to change the server configuration of your Oracle Exadata.
- Adding a New Database Server to the Cluster
You can add a new database server to an existing Oracle Real Application Clusters (Oracle RAC) cluster running on Oracle Exadata. - Moving an Existing Database Server to a Different Cluster
You can repurpose an existing database server and move it to a different cluster within the same Oracle Exadata Rack. - Dropping a Database Server from an Oracle RAC Cluster
You can remove a database server that is a member of an Oracle Real Application Clusters (Oracle RAC) cluster.
Related Topics
Parent topic: Maintaining Exadata Database Servers
2.13.1 Adding a New Database Server to the Cluster
You can add a new database server to an existing Oracle Real Application Clusters (Oracle RAC) cluster running on Oracle Exadata.
2.13.2 Moving an Existing Database Server to a Different Cluster
You can repurpose an existing database server and move it to a different cluster within the same Oracle Exadata Rack.
2.14 Managing Quorum Disks for High Redundancy Disk Groups
Quorum disks allow for the Oracle RAC voting files to be stored in a high redundancy disk group on an Oracle Exadata Rack with fewer than five storage servers due to the presence of two extra failure groups.
- Using Quorum Disks to Increase Fault Tolerance
Quorum disks are used to meet the minimum requirement of five failure groups for a high redundancy disk group on a system that does not have five storage servers. - Overview of Quorum Disk Manager
The Quorum Disk Manager utility, introduced in Oracle Exadata System Software release 12.1.2.3.0, helps you to manage the quorum disks. - Software Requirements for Quorum Disk Manager
You must satisfy the minimum software requirements to use the Quorum Disk Manager utility. - quorumdiskmgr Reference
- Add Quorum Disks to Database Nodes
You can add quorum disks to database nodes on an Oracle Exadata Rack with fewer than five storage servers that contains a high redundancy disk group. - Recreate Quorum Disks
In certain circumstances, you might need to recreate a quorum disk. - Use Cases
The following topics describe various configuration cases when using the quorum disk manager utility. - Reconfigure Quorum Disk After Restoring a Database Server
After restoring a database server, lvdisplay shows the quorum disk was not restored.
Parent topic: Maintaining Exadata Database Servers
2.14.1 Using Quorum Disks to Increase Fault Tolerance
Quorum disks are used to meet the minimum requirement of five failure groups for a high redundancy disk group on a system that does not have five storage servers.
A failure group is a subset of the disks in a disk group, which could fail at the same time because they share hardware. Oracle recommends a minimum of three failure groups for normal redundancy disk groups and five failure groups for high redundancy disk groups to maintain the necessary number of copies of the Partner Status Table (PST) and to ensure robustness with respect to storage hardware failures. On Engineered Systems, these recommendations are enforced to ensure the highest availability of the system.
The PST contains status information about the Oracle Automatic Storage Management (Oracle ASM) disks in a disk group, such as the disk number, status (either online or offline), partner disk number, failure group info, and heartbeat info. To tolerate a single hardware failure, you need 3 total copies of the PST available to form a 2 of 3 majority. If there are two hardware failures, then you need a total of 5 copies of the PST so that after a double failure you still have a 3 of 5 majority.
A quorum failure group is a special type of failure group that does not contain user data. Quorum failure groups are used for storing the PST. A quorum failure group can also be used to store a copy of the voting file for Oracle Clusterware. Because disks in quorum failure groups do not contain user data, a quorum failure group is not considered when determining redundancy requirements with respect to storing user data.
In the event of a system failure, three failure groups in a normal redundancy disk group allow a comparison among three PSTs to accurately determine the most up to date and correct version of the PST, which could not be done with a comparison between only two PSTs. Similarly with a high redundancy disk group, if two failure groups are offline, then Oracle ASM would be able to make a comparison among the three remaining PSTs.
You can create quorum failure groups with Oracle Exadata Deployment Assistant (OEDA) when deploying Exadata or you can add them later using the Quorum Disk Manager Utility. The iSCSI quorum disks are created on database nodes and a voting file is created on those quorum disks. These additional quorum failure groups are used to meet the minimum requirement of five voting files and PSTs for a high redundancy disk group. Quorum disks are required when the following conditions exist:
- The Oracle Exadata Rack has fewer than five storage servers.
- The Oracle Exadata Rack has at least two database nodes.
- The Oracle Exadata Rack has at least one high redundancy disk group.
Quorum failure groups allow for a high redundancy disk group to exist on Oracle Exadata Racks with fewer than five storage servers by creating two extra failure groups. Without this feature, a disk group is vulnerable to a double partner storage server failure that results in the loss of the PST or voting file quorum, which can cause a complete cluster and database outage. Refer to My Oracle Support note 1339373.1 for how to restart the clusterware and databases in this scenario.
The iSCSI quorum disk implementation has high availability because the IP addresses on the RDMA Network Fabric are highly available using RDS.
Each iSCSI device shown in the figure below corresponds to a particular path to the iSCSI target. Each path corresponds to an RDMA Network Fabric port on the database node. For each multipath quorum disk device in an active-active system, there are two iSCSI devices, one for ib0 or re0 and the other for ib1 or re1.
Figure 2-1 Multipath Device Connects to Both iSCSI Devices in an Active-Active System

Description of "Figure 2-1 Multipath Device Connects to Both iSCSI Devices in an Active-Active System"
Quorum disks can be used with bare metal Oracle Real Application Clusters (Oracle RAC) clusters and Oracle VM Oracle RAC clusters. For Oracle VM Oracle RAC clusters, the quorum disk devices reside in the Oracle RAC cluster nodes which are Oracle VM user domains as shown in the following figure.
Figure 2-2 Quorum Disk Devices on Oracle VM Oracle RAC Cluster

Description of "Figure 2-2 Quorum Disk Devices on Oracle VM Oracle RAC Cluster"
Note:
For pkey-enabled environments, the interfaces used for discovering the targets should be the pkey interfaces used for the Oracle Clusterware communication. These interfaces are listed using the following command:
Grid_home/bin/oifcfg getif | grep cluster_interconnect | awk '{print $1}'
2.14.2 Overview of Quorum Disk Manager
The Quorum Disk Manager utility, introduced in Oracle Exadata System Software release 12.1.2.3.0, helps you to manage the quorum disks.
This utility enables you to create an iSCSI quorum disk on two of the database nodes and store a voting file on those two quorum disks. These two additional voting files are used to meet the minimum requirement of five voting files for a high redundancy disk group.
The Quorum Disk Manager utility (quorumdiskmgr) is used to create and manage all the necessary components including the iSCSI configuration, the iSCSI targets, the iSCSI LUNs, and the iSCSI devices for implementing quorum disks.
Related Topics
Parent topic: Managing Quorum Disks for High Redundancy Disk Groups
2.14.3 Software Requirements for Quorum Disk Manager
You must satisfy the minimum software requirements to use the Quorum Disk Manager utility.
To use this feature, the following releases are required:
-
Oracle Exadata System Software release 12.1.2.3.0 and above
-
Patch 23200778 for all Oracle Database homes
-
Oracle Grid Infrastructure release 12.1.0.2.160119 with patches 22722476 and 22682752, or Oracle Grid Infrastructure release 12.1.0.2.160419 and above
For new deployments, Oracle Exadata Deployment Assistant (OEDA) installs the patches automatically.
Parent topic: Managing Quorum Disks for High Redundancy Disk Groups
2.14.4 quorumdiskmgr Reference
The quorum disk manager utility (quorumdiskmgr) runs on each database server to enable you to create and manage iSCSI quorum disks on database servers. You use quorumdiskmgr to create, list, alter, and delete iSCSI quorum disks on database servers. The utility is installed on database servers when they are shipped.
- Syntax for the Quorum Disk Manager Utility
- quorumdiskmgr Objects
- Creating a Quorum Disk Configuration (--create --config)
The --create --config action creates a quorum disk configuration. - Creating a Target (--create --target)
The --create --target action creates a target that will be used to create the devices to add to the specified Oracle ASM disk group. - Creating a Device (--create --device)
The --create --device action creates devices by discovering and logging into targets on database servers with an RDMA Network Fabric IP address in the specified list of IP addresses. - Listing Quorum Disk Configurations (--list --config)
The --list --config action lists the quorum disk configurations. - Listing Targets (--list --target)
The --list --target action lists the attributes of targets. - Listing Devices (--list --device)
The --list --device action lists the attributes of devices, including device path, size, host name and ASM disk group name. - Deleting Configurations (--delete --config)
The --delete --config action deletes quorum disk configurations. - Deleting Targets (--delete --target)
The --delete --target action deletes the targets created for quorum disks on database servers. - Deleting Devices (--delete --device)
The --delete --device command deletes quorum disk devices. - Changing Owner and Group Values (--alter --config)
The --alter --config action changes the owner and group configurations. - Changing the RDMA Network Fabric IP Addresses (--alter --target)
The --alter --target command changes the RDMA Network Fabric IP addresses of the database servers that have access to the local target created for the specified Oracle ASM disk group.
Parent topic: Managing Quorum Disks for High Redundancy Disk Groups
2.14.4.1 Syntax for the Quorum Disk Manager Utility
The quorum disk manager utility is a command-line tool. It has the following syntax:
quorumdiskmgr --verb --object [--options]
verb is an action performed on an object. It is one of: alter, create, delete, list.
object is an object on which the command performs an action.
options extend the use of a command combination to include additional parameters for the command.
When using the quorumdiskmgr utility, the following rules apply:
-
Verbs, objects, and options are case-sensitive except where explicitly stated.
-
Use the double quote character around the value of an option that includes spaces or punctuation.
Parent topic: quorumdiskmgr Reference
2.14.4.2 quorumdiskmgr Objects
Object | Description |
---|---|
config | The quorum disk configurations include the owner and group of the ASM instance to which the iSCSI quorum disks will be added, and the list of network interfaces through which local and remote iSCSI quorum disks will be discovered. |
target | A target is an endpoint on each database server that waits for an iSCSI initiator to establish a session and provides required IO data transfer. |
device | A device is an iSCSI device created by logging into a local or remote target. |
Parent topic: quorumdiskmgr Reference
2.14.4.3 Creating a Quorum Disk Configuration (--create --config)
The --create --config action creates a quorum disk configuration.
The configuration must be created before any targets or devices can be created.
Syntax
quorumdiskmgr --create --config [--owner owner --group group]
--network-iface-list network-iface-list
Parameters
The following table lists the parameters for the --create --config action:
Parameter | Description |
---|---|
--owner | Specifies the owner of the Oracle ASM instance to which the iSCSI quorum disks will be added. This is an optional parameter. The default value is grid. |
--group | Specifies the group of the Oracle ASM instance to which the iSCSI quorum disks will be added. This is an optional parameter. The default value is dba. |
--network-iface-list | Specifies the list of RDMA Network Fabric interface names through which the local and remote targets will be discovered. |
Example 2-5 Create a Quorum Disk Configuration for a System with InfiniBand Network Fabric
quorumdiskmgr --create --config --owner=oracle --group=dba --network-iface-list="ib0, ib1"
Example 2-6 Create a Quorum Disk Configuration for a System with RoCE Network Fabric
quorumdiskmgr --create --config --owner=oracle --group=dba --network-iface-list="re0, re1"
Parent topic: quorumdiskmgr Reference
2.14.4.4 Creating a Target (--create --target)
The --create --target action creates a target that will be used to create the devices to add to the specified Oracle ASM disk group.
The --create --target action creates a target that can be accessed by database servers with an RDMA Network Fabric IP address in the specified IP address list.
After a target is created, the asm-disk-group, host-name, and size attributes cannot be changed.
Syntax
quorumdiskmgr --create --target --asm-disk-group asm_disk_group --visible-to ip_list
[--host-name host_name] [--size size]
Parameters
Parameter | Description |
---|---|
--asm-disk-group | Specifies the Oracle ASM disk group to which the device created from the target will be added. The value of asm-disk-group is not case-sensitive. |
--visible-to | Specifies a list of RDMA Network Fabric IP addresses. Database servers with an RDMA Network Fabric IP address in the list will have access to the target. |
--host-name | Specifies the host name of the database server on which quorumdiskmgr is run. This is an optional parameter. The default value is the host name of the database server on which quorumdiskmgr is run. The value of host-name is not case-sensitive. |
--size | Specifies the size of the target. This is an optional parameter. The default value is 128 MB. |
Example 2-7 Creating a Target For Oracle ASM Disk Group Devices
This example shows how to create a target for devices added to the DATAC1 disk group. That target is only visible to database servers that have an RDMA Network Fabric IP address of 192.168.10.45 or 192.168.10.46.
quorumdiskmgr --create --target --asm-disk-group=datac1 --visible-to="192.168.10.45, 192.168.10.46"
--host-name=db01
Parent topic: quorumdiskmgr Reference
2.14.4.5 Creating a Device (--create --device)
The --create --device action creates devices by discovering and logging into targets on database servers with an RDMA Network Fabric IP address in the specified list of IP addresses.
The created devices will be automatically discovered by the Oracle ASM instance with the owner and group specified during configuration creation.
Syntax
quorumdiskmgr --create --device --target-ip-list target_ip_list
Parameters
-
--target-ip-list: Specifies a list of RDMA Network Fabric IP addresses. quorumdiskmgr discovers targets on database servers that have an IP address in the list, then logs in to those targets to create devices.
Example
Example 2-8 Creating Devices From a Target For an Oracle ASM Disk Group
This example shows how to create devices using targets on database servers that have an IP address of 192.168.10.45 or 192.168.10.46.
quorumdiskmgr --create --device --target-ip-list="192.168.10.45, 192.168.10.46"
Parent topic: quorumdiskmgr Reference
2.14.4.6 Listing Quorum Disk Configurations (--list --config)
The --list --config action lists the quorum disk configurations.
Syntax
quorumdiskmgr --list --config
Sample Output
Example 2-9 Listing the quorum disk configuration on a rack with InfiniBand Network Fabric
$ quorumdiskmgr --list --config
Owner: grid
Group: dba
ifaces: exadata_ib1 exadata_ib0
Example 2-10 Listing the quorum disk configuration on a rack with RoCE Network Fabric
$ quorumdiskmgr --list --config
Owner: grid
Group: dba
ifaces: exadata_re1 exadata_re0
Parent topic: quorumdiskmgr Reference
2.14.4.7 Listing Targets (--list --target)
The --list --target action lists the attributes of targets.
The target attributes listed include target name, size, host name, Oracle ASM disk group name, the list of IP addresses (a visible-to IP address list) indicating which database servers have access to the target, and the list of IP addresses (a discovered-by IP address list) indicating which database servers have logged into the target.
If an Oracle ASM disk group name is specified, the action lists all local targets created for the specified Oracle ASM disk group. Otherwise, the action lists all local targets created for quorum disks.
Syntax
quorumdiskmgr --list --target [--asm-disk-group asm_disk_group]
Parameters
--asm-disk-group: Specifies the Oracle ASM disk group. quorumdiskmgr displays all local targets for this Oracle ASM disk group. The value of asm-disk-group is not case-sensitive.
Example 2-11 Listing the Target Attributes for a Specific Oracle ASM Disk Group
This example shows how to list the attributes of the target for the DATAC1 disk group.
quorumdiskmgr --list --target --asm-disk-group=datac1
Name: iqn.2015-05.com.oracle:qd--datac1_db01
Size: 128 MB
Host name: DB01
ASM disk group name: DATAC1
Visible to: iqn.1988-12.com.oracle:192.168.10.23, iqn.1988-12.com.oracle:192.168.10.24,
iqn.1988-12.com.oracle:1b48248af770, iqn.1988-12.com.oracle:7a4a399566
Discovered by: 192.168.10.47, 192.168.10.46
Note:
For systems installed using a release prior to Oracle Exadata System Software 19.1.0, the Name might appear as iqn.2015-05.com.oracle:QD_DATAC1_DB01. Also, the Visible to field displays IP addresses instead of names.
Parent topic: quorumdiskmgr Reference
2.14.4.8 Listing Devices (--list --device)
The --list --device action lists the attributes of devices, including device path, size, host name, and ASM disk group name.
-
If only the Oracle ASM disk group name is specified, then the output includes all the devices that have been added to the Oracle ASM disk group.
-
If only the host name is specified, then the output includes all the devices created from the targets on the host.
-
If both an Oracle ASM disk group name and a host name are specified, then the output includes a single device created from the target on the host that has been added to the Oracle ASM disk group.
-
If neither an Oracle ASM disk group name nor a host name is specified, then the output includes all quorum disk devices.
Syntax
quorumdiskmgr --list --device [--asm-disk-group asm_disk_group] [--host-name host_name]
Parameters
Parameter | Description |
---|---|
--asm-disk-group | Specifies the Oracle ASM disk group to which devices have been added. The value of asm-disk-group is not case-sensitive. |
--host-name | Specifies the host name of the database server from whose targets devices are created. The value of host-name is not case-sensitive. |
Example 2-12 Listing Device Attributes for an Oracle ASM Disk Group
This example shows how to list the attributes for devices used by the DATAC1 disk group.
$ quorumdiskmgr --list --device --asm-disk-group datac1
Device path: /dev/exadata_quorum/QD_DATAC1_DB01
Size: 128 MB
Host name: DB01
ASM disk group name: DATAC1
Device path: /dev/exadata_quorum/QD_DATAC1_DB02
Size: 128 MB
Host name: DB02
ASM disk group name: DATAC1
Parent topic: quorumdiskmgr Reference
2.14.4.9 Deleting Configurations (--delete --config)
The --delete --config action deletes quorum disk configurations.
The configurations can only be deleted when there are no targets or devices present.
Syntax
quorumdiskmgr --delete --config
Parent topic: quorumdiskmgr Reference
2.14.4.10 Deleting Targets (--delete --target)
The --delete --target action deletes the targets created for quorum disks on database servers.
If an Oracle ASM disk group name is specified, then this command deletes all the local targets created for the specified Oracle ASM disk group. Otherwise, this command deletes all local targets created for quorum disks.
Syntax
quorumdiskmgr --delete --target [--asm-disk-group asm_disk_group]
Parameters
-
--asm-disk-group: Specifies the Oracle ASM disk group. Local targets created for this disk group will be deleted. The value of asm-disk-group is not case-sensitive.
Example 2-13 Deleting Targets Created for an Oracle ASM Disk Group
This example shows how to delete targets created for the DATAC1 disk group.
quorumdiskmgr --delete --target --asm-disk-group=datac1
Parent topic: quorumdiskmgr Reference
2.14.4.11 Deleting Devices (--delete --device)
The --delete --device command deletes quorum disk devices.
-
If only an Oracle ASM disk group name is specified, then the command deletes all the devices that have been added to the Oracle ASM disk group.
-
If only a host name is specified, then the command deletes all the devices created from the targets on the host.
-
If both an Oracle ASM disk group name and a host name are specified, then the command deletes the single device created from the target on the host that has been added to the Oracle ASM disk group.
-
If neither an Oracle ASM disk group name nor a host name is specified, then the command deletes all quorum disk devices.
Syntax
quorumdiskmgr --delete --device [--asm-disk-group asm_disk_group] [--host-name host_name]
Parameters
Parameter | Description |
---|---|
--asm-disk-group | Specifies the Oracle ASM disk group whose devices you want to delete. The value of asm-disk-group is not case-sensitive. |
--host-name | Specifies the host name of the database server. Devices created from targets on this host will be deleted. The value of host-name is not case-sensitive. |
Example 2-14 Deleting Quorum Disk Devices Created from Targets on a Specific Host
This example shows how to delete all the quorum disk devices that were created from the targets on the host DB01.
quorumdiskmgr --delete --device --host-name=db01
Parent topic: quorumdiskmgr Reference
2.14.4.12 Changing Owner and Group Values (--alter --config)
The --alter --config action changes the owner and group configurations.
Syntax
quorumdiskmgr --alter --config --owner owner --group group
Parameters
Parameter | Description |
---|---|
--owner | Specifies the new owner for the quorum disk configuration. This parameter is optional. If not specified, the owner is unchanged. |
--group | Specifies the new group for the quorum disk configuration. This parameter is optional. If not specified, the group is unchanged. |
Example 2-15 Changing the Owner and Group Configuration for Quorum Disk Devices
This example shows how to change the assigned owner and group for quorum disk devices.
quorumdiskmgr --alter --config --owner=grid --group=dba
Parent topic: quorumdiskmgr Reference
2.14.4.13 Changing the RDMA Network Fabric IP Addresses (--alter --target)
The --alter --target command changes the RDMA Network Fabric IP addresses of the database servers that have access to the local target created for the specified Oracle ASM disk group.
Syntax
quorumdiskmgr --alter --target --asm-disk-group asm_disk_group --visible-to ip_list
Parameters
Parameter | Description |
---|---|
--asm-disk-group | Specifies the Oracle ASM disk group to which the device created from the target will be added. The value of asm-disk-group is not case-sensitive. |
--visible-to | Specifies a list of RDMA Network Fabric IP addresses. Database servers with an RDMA Network Fabric IP address in the list will have access to the target. |
Example 2-16 Changing the RDMA Network Fabric IP Addresses for Accessing Targets
This example shows how to change the RDMA Network Fabric IP address list that determines which database servers have access to the local target created for the DATAC1 disk group.
quorumdiskmgr --alter --target --asm-disk-group=datac1 --visible-to="192.168.10.45, 192.168.10.47"
Parent topic: quorumdiskmgr Reference
2.14.5 Add Quorum Disks to Database Nodes
You can add quorum disks to database nodes on an Oracle Exadata Rack with fewer than five storage servers that contains a high redundancy disk group.
Oracle strongly recommends having quorum disks in all high redundancy disk groups with fewer than five failure groups. Having five quorum disks is important to mirror ASM metadata in any high redundancy disk group, and not just for the disk group housing the voting files.
The example in this section creates quorum disks for an Oracle Exadata Rack that has two database servers: db01 and db02.
Typically, there are two RDMA Network Fabric ports on each database server:
- For systems with InfiniBand Network Fabric, the ports are ib0 and ib1
- For systems with RoCE Network Fabric, the ports are re0 and re1
On each cluster node, the network interfaces to be used for communication with the iSCSI devices can be found using the following command:
$ oifcfg getif | grep cluster_interconnect | awk '{print $1}'
The IP address of each interface can be found using the following command:
# ip addr show interface_name
The RDMA Network Fabric IP addresses for this example are as follows:
On db01:
- Network interface: ib0 or re0, IP address: 192.168.10.45
- Network interface: ib1 or re1, IP address: 192.168.10.46
On db02:
- Network interface: ib0 or re0, IP address: 192.168.10.47
- Network interface: ib1 or re1, IP address: 192.168.10.48
The Oracle ASM disk group to which the quorum disks will be added is DATAC1. The Oracle ASM owner is grid, and the user group is dba.
In this example, we will move the voting files from the normal redundancy disk group RECOC1 to DATAC1 after it has been augmented with quorum disks to yield five failure groups. The example shows the cluster voting files moving from RECOC1 to DATAC1, but there is no need to relocate voting files if you are just adding quorum disks to a high redundancy disk group and you already have your voting files in some other high redundancy disk group.
Initially, the voting files reside on the normal redundancy disk group RECOC1:
$ Grid_home/bin/crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 21f5507a28934f77bf3b7ecf88b26c47 (o/192.168.76.187;192.168.76.188/RECOC1_CD_00_celadm12) [RECOC1]
2. ONLINE 387f71ee81f14f38bfbdf0693451e328 (o/192.168.76.189;192.168.76.190/RECOC1_CD_00_celadm13) [RECOC1]
3. ONLINE 6f7fab62e6054fb8bf167108cdbd2f64 (o/192.168.76.191;192.168.76.192/RECOC1_CD_00_celadm14) [RECOC1]
Located 3 voting disk(s).
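Based on the quorumdiskmgr reference earlier in this chapter, the creation sequence for this example can be condensed into the following sketch. It is a hedged illustration (RoCE interface names are assumed; substitute ib0 and ib1 on InfiniBand systems), not the complete documented procedure:
# Run as root on db01, and then repeat on db02.
/opt/oracle.SupportTools/quorumdiskmgr --create --config --owner=grid --group=dba --network-iface-list="re0, re1"
/opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=datac1 --visible-to="192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48"
# Run as root on each database server to create the devices.
/opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48"
With devices in place on both nodes (device names follow the pattern shown in Example 2-12), the quorum failure groups are added to the disk group and the voting files are relocated, for example:
SQL> ALTER DISKGROUP datac1 ADD QUORUM FAILGROUP db01 DISK '/dev/exadata_quorum/QD_DATAC1_DB01' QUORUM FAILGROUP db02 DISK '/dev/exadata_quorum/QD_DATAC1_DB02';
$ Grid_home/bin/crsctl replace votedisk +DATAC1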
2.14.6 Recreate Quorum Disks
In certain circumstances, you might need to recreate a quorum disk.
-
When recreating a guest domU
-
If you deleted the quorum disks without first dropping the quorum disks from the Oracle ASM disk group
Related Topics
Parent topic: Managing Quorum Disks for High Redundancy Disk Groups
2.14.7 Use Cases
The following topics describe various configuration cases when using the quorum disk manager utility.
- New Deployments on Oracle Exadata 12.1.2.3.0 or Later
- Upgrading to Oracle Exadata Release 12.1.2.3.0 or Later
- Downgrading to a Pre-12.1.2.3.0 Oracle Exadata Release
- Managing Quorum Disks When Changing Elastic Configurations
When modifying the elastic configuration of an Oracle Exadata Rack, you might have to perform additional actions if you use quorum disks.
Parent topic: Managing Quorum Disks for High Redundancy Disk Groups
2.14.7.1 New Deployments on Oracle Exadata 12.1.2.3.0 or Later
For new deployments on Oracle Exadata release 12.1.2.3.0 and above, OEDA implements this feature by default when all of the following requirements are satisfied:
-
The system has at least two database nodes and fewer than five storage servers.
-
You are running OEDA release February 2016 or later.
-
You meet the software requirements listed in Software Requirements for Quorum Disk Manager.
-
Oracle Database is 11.2.0.4 and above.
-
The system has at least one high redundancy disk group.
If the system has three storage servers in place, then two quorum disks will be created on the first two database nodes of the cluster picked by OEDA.
If the system has four storage servers in place, then one quorum disk will be created on the first database node picked by OEDA.
Parent topic: Use Cases
2.14.7.2 Upgrading to Oracle Exadata Release 12.1.2.3.0 or Later
If the target Exadata system has fewer than five storage servers, at least one high redundancy disk group, and two or more database nodes, you can implement this feature manually using quorumdiskmgr.
Related Topics
Parent topic: Use Cases
2.14.7.3 Downgrading to a Pre-12.1.2.3.0 Oracle Exadata Release
Quorum disks are supported starting with Oracle Exadata release 12.1.2.3.0. If the environment has a quorum disk implementation in place and you are rolling back to a release earlier than 12.1.2.3.0, you must remove the quorum disk configuration before performing the Exadata software rollback.
To remove quorum disk configuration, perform these steps:
-
Ensure there is at least one normal redundancy disk group in place. If not, create one.
-
Relocate the voting files to a normal redundancy disk group:
$GI_HOME/bin/crsctl replace votedisk +normal_redundancy_diskgroup
-
Drop the quorum disks from ASM. Run the following command for each quorum disk:
SQL> alter diskgroup diskgroup_name drop quorum disk quorum_disk_name force;
Wait for the rebalance operation to complete. You can tell it is complete when v$asm_operation returns no rows for the disk group (see the query sketch after this list).
Delete the quorum devices. Run the following command from each database node that has quorum disks in place:
/opt/oracle.SupportTools/quorumdiskmgr --delete --device [--asm-disk-group asm_disk_group] [--host-name host_name]
-
Delete the targets. Run the following command from each database node that has quorum disks in place:
/opt/oracle.SupportTools/quorumdiskmgr --delete --target [--asm-disk-group asm_disk_group]
-
Delete the configuration. Run the following command from each database node that has quorum disks in place:
/opt/oracle.SupportTools/quorumdiskmgr --delete --config
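A minimal query sketch for checking the rebalance in step 3, assuming a disk group named DATAC1; run it in SQL*Plus with access to the V$ views:
SQL> SELECT operation, state, est_minutes FROM v$asm_operation WHERE group_number = (SELECT group_number FROM v$asm_diskgroup WHERE name = 'DATAC1');
When the rebalance is complete, the query returns no rows for the disk group.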
Parent topic: Use Cases
2.14.7.4 Managing Quorum Disks When Changing Elastic Configurations
When modifying the elastic configuration of an Oracle Exadata Rack, you might have to perform additional actions if you use quorum disks.
- Adding a Database Node if Using Quorum Disks
- Removing a Database Node When Using Quorum Disks
If the database node being removed hosted a quorum disk containing a voting file and there are fewer than five storage servers in the Oracle Real Application Clusters (Oracle RAC) cluster, then a quorum disk must be created on a different database node before the database node is removed. - Adding an Oracle Exadata Storage Server and Expanding an Existing High Redundancy Disk Group
- Removing an Oracle Exadata Storage Server When Using Quorum Disks
Parent topic: Use Cases
2.14.7.4.1 Adding a Database Node if Using Quorum Disks
If the existing Oracle Real Application Clusters (Oracle RAC) cluster has fewer than two database nodes and fewer than five storage servers, and the voting files are not stored in a high redundancy disk group, then Oracle recommends adding quorum disks to the database node(s) and relocating the voting files to a high redundancy disk group.
Note:
The requirements listed in "Software Requirements for Quorum Disk Manager" must be met.
If the existing Oracle RAC cluster already has quorum disks in place, the quorum disks need to be made visible to the newly added node prior to adding the node to the Oracle RAC cluster using the addnode.sh procedure.
Related Topics
2.14.7.4.2 Removing a Database Node When Using Quorum Disks
If the database node being removed hosted a quorum disk containing a voting file and there are fewer than five storage servers in the Oracle Real Application Clusters (Oracle RAC) cluster, then a quorum disk must be created on a different database node before the database node is removed.
If the database node being removed did not host a quorum disk, then no action is required. Otherwise, use these steps to create a quorum disk on a database node that does not currently host a quorum disk.
2.14.7.4.3 Adding an Oracle Exadata Storage Server and Expanding an Existing High Redundancy Disk Group
When you add a storage server that uses quorum disks, Oracle recommends relocating a voting file from a database node to the newly added storage server.
-
Add the Exadata storage server. See Adding a Cell Node for details.
In the example below, the new storage server added is called "celadm04".
-
After the storage server is added, verify the new fail group from v$asm_disk.
SQL> select distinct failgroup from v$asm_disk;

FAILGROUP
------------------------------
ADM01
ADM02
CELADM01
CELADM02
CELADM03
CELADM04
-
Verify at least one database node has a quorum disk containing a voting file.
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 834ee5a8f5054f12bf47210c51ecb8f4 (o/192.168.12.125;192.168.12.126/DATAC5_CD_00_celadm01) [DATAC5]
 2. ONLINE f4af2213d9964f0bbfa30b2ba711b475 (o/192.168.12.127;192.168.12.128/DATAC5_CD_00_celadm02) [DATAC5]
 3. ONLINE ed61778df2964f37bf1d53ea03cd7173 (o/192.168.12.129;192.168.12.130/DATAC5_CD_00_celadm03) [DATAC5]
 4. ONLINE bfe1c3aa91334f16bf78ee7d33ad77e0 (/dev/exadata_quorum/QD_DATAC5_ADM01) [DATAC5]
 5. ONLINE a3a56e7145694f75bf21751520b226ef (/dev/exadata_quorum/QD_DATAC5_ADM02) [DATAC5]
Located 5 voting disk(s).
The example above shows there are two quorum disks with voting files on two database nodes.
-
Drop one of the quorum disks.
SQL> alter diskgroup datac5 drop quorum disk QD_DATAC5_ADM01;
The voting file on the dropped quorum disk will be relocated automatically to the newly added storage server by the Grid Infrastructure as part of the voting file refresh. You can verify this as follows:
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 834ee5a8f5054f12bf47210c51ecb8f4 (o/192.168.12.125;192.168.12.126/DATAC5_CD_00_celadm01) [DATAC5]
 2. ONLINE f4af2213d9964f0bbfa30b2ba711b475 (o/192.168.12.127;192.168.12.128/DATAC5_CD_00_celadm02) [DATAC5]
 3. ONLINE ed61778df2964f37bf1d53ea03cd7173 (o/192.168.12.129;192.168.12.130/DATAC5_CD_00_celadm03) [DATAC5]
 4. ONLINE a3a56e7145694f75bf21751520b226ef (/dev/exadata_quorum/QD_DATAC5_ADM02) [DATAC5]
 5. ONLINE ab5aefd60cf84fe9bff6541b16e33787 (o/192.168.12.131;192.168.12.132/DATAC5_CD_00_celadm04) [DATAC5]
2.14.7.4.4 Removing an Oracle Exadata Storage Server When Using Quorum Disks
If removing a storage server reduces the number of storage servers used by the Oracle RAC cluster to fewer than five, and the voting files reside in a high redundancy disk group, then Oracle recommends adding quorum disks to the database nodes, if they are not already in place.
Prior to removing the storage server, add the quorum disks so that five copies of the voting files are available immediately after removing the storage server.
Related Topics
2.14.8 Reconfigure Quorum Disk After Restoring a Database Server
After restoring a database server, lvdisplay shows the quorum disk was not restored.
When you restore a database server, Exadata image rescue mode restores the layout of disks and file systems, with the exception of custom partitions, including quorum disks. These must be recreated after the server is restored from backup.
The logical volumes created for quorum disks are in /dev/VGExaDb and have the name prefix LVDbVd*.
Parent topic: Managing Quorum Disks for High Redundancy Disk Groups
2.15 Using vmetrics
The vmetrics package enables you to display system statistics gathered by the vmetrics service.
- About the vmetrics Package
The vmetrics service collects the statistics required for SAP monitoring of Oracle VM domains. - Installing and Starting the vmetrics Service
- Files in the vmetrics Package
- Displaying the Statistics
- Adding Metrics to vmetrics
Parent topic: Maintaining Exadata Database Servers
2.15.1 About the vmetrics Package
The vmetrics service collects the statistics required for SAP monitoring of Oracle VM domains.
You can access the system statistics from the management domain (dom0) or the user domain (domU). The vmetrics service runs on the management domain, collects the statistics, and pushes them to the xenstore. This allows the user domains to access the statistics.
System statistics collected by the vmetrics service are shown below, with sample values:
com.sap.host.host.VirtualizationVendor=Oracle Corporation;
com.sap.host.host.VirtProductInfo=Oracle VM 3;
com.sap.host.host.PagedInMemory=0;
com.sap.host.host.PagedOutMemory=0;
com.sap.host.host.PageRates=0;
com.sap.vm.vm.uuid=2b80522b-060d-47ee-8209-2ab65778eb7e;
com.sap.host.host.HostName=sc10adm01.example.com;
com.sap.host.host.HostSystemInfo=sc10adm01;
com.sap.host.host.NumberOfPhysicalCPUs=24;
com.sap.host.host.NumCPUs=4;
com.sap.host.host.TotalPhyMem=98295;
com.sap.host.host.UsedVirtualMemory=2577;
com.sap.host.host.MemoryAllocatedToVirtualServers=2577;
com.sap.host.host.FreeVirtualMemory=29788;
com.sap.host.host.FreePhysicalMemory=5212;
com.sap.host.host.TotalCPUTime=242507.220000;
com.sap.host.host.Time=1453150151;
com.sap.vm.vm.PhysicalMemoryAllocatedToVirtualSystem=8192;
com.sap.vm.vm.ResourceMemoryLimit=8192;
com.sap.vm.vm.TotalCPUTime=10160.1831404;
com.sap.vm.vm.ResourceProcessorLimit=4;
Parent topic: Using vmetrics
2.15.2 Installing and Starting the vmetrics Service
To install the vmetrics service, run the install.sh script as the root user on dom0:
[root@scac10adm01]# cd /opt/oracle.SupportTools/vmetrics
[root@scac10adm01]# ./install.sh
The install.sh script verifies that it is running on dom0, stops any vmetrics services currently running, copies the package files to /opt/oracle.vmetrics, and copies vmetrics.svc to /etc/init.d.
To start the vmetrics service on dom0, run the following command as the root user on dom0:
[root@scac10adm01 vmetrics]# service vmetrics.svc start
The commands to gather the statistics are run every 30 seconds.
Parent topic: Using vmetrics
2.15.3 Files in the vmetrics Package
The vmetrics package contains the following files:
File | Description |
---|---|
install.sh | This file installs the package. |
vm-dump-metrics | This script reads the statistics from the xenstore and displays them in XML format. |
vmetrics.py | This Python script runs the system commands and uploads them to the xenstore. The system commands are listed in the vmetrics.conf file. |
vmetrics.conf | This XML file specifies the metrics that the dom0 should push to the xenstore, and the system commands to run for each metric. |
vmetrics.svc | The init.d file that makes vmetrics run as a service. |
Parent topic: Using vmetrics
2.15.4 Displaying the Statistics
Once the statistics have been pushed to the xenstore, you can view the statistics on dom0 and domU by running either of the following commands:
Note:
On domU's, ensure that the xenstoreprovider and ovmd packages are installed.
xenstoreprovider is the library which communicates with the ovmapi kernel infrastructure.
ovmd is a daemon that handles configuration and reconfiguration events and provides a mechanism to send/receive messages between the VM and the Oracle VM Manager.
The following command installs the necessary packages on Oracle Linux 5 and 6 to support the Oracle VM API.
# yum install ovmd xenstoreprovider
-
The /usr/sbin/ovmd -g vmhost command displays the statistics on one line. The sed command breaks up the line into multiple lines, one statistic per line. You need to run this command as the root user.
[root@scac10db01vm04 ~]# /usr/sbin/ovmd -g vmhost |sed 's/; */;\n/g;s/:"/:"\n/g'
com.sap.host.host.VirtualizationVendor=Oracle Corporation;
com.sap.host.host.VirtProductInfo=Oracle VM 3;
com.sap.host.host.PagedInMemory=0;
com.sap.host.host.PagedOutMemory=0;
com.sap.host.host.PageRates=0;
com.sap.vm.vm.uuid=2b80522b-060d-47ee-8209-2ab65778eb7e;
com.sap.host.host.HostName=scac10adm01.example.com;
com.sap.host.host.HostSystemInfo=scac10adm01;
com.sap.host.host.NumberOfPhysicalCPUs=24;
com.sap.host.host.NumCPUs=4;
...
-
The vm-dump-metrics command displays the metrics in XML format.
[root@scac10db01vm04 ~]# ./vm-dump-metrics
<metrics>
 <metric type='real64' context='host'>
  <name>TotalCPUTime</name>
  <value>242773.600000</value>
 </metric>
 <metric type='uint64' context='host'>
  <name>PagedOutMemory</name>
  <value>0</value>
 </metric>
...
Note that you have to copy the vm-dump-metrics command to the domU's from which you want to run the command, as shown in the copy sketch after this list.
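A minimal copy sketch, assuming the package files reside in /opt/oracle.vmetrics on dom0; the host names are hypothetical:
[root@scac10adm01 ~]# scp /opt/oracle.vmetrics/vm-dump-metrics root@scac10db01vm04:/root/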
Parent topic: Using vmetrics
2.15.5 Adding Metrics to vmetrics
You can add your own metric to be collected by the vmetrics service.
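For example, a new entry in the configuration file pairs a metric name with the system command that produces its value. The snippet below is a hypothetical illustration patterned on the XML output shown earlier; the exact element names used by vmetrics.conf may differ:
<!-- Hypothetical metric definition: the exact vmetrics.conf schema may differ. -->
<metric type='uint64' context='host'>
 <name>FreeDiskSpace</name>
 <action>df --output=avail / | tail -1</action>
</metric>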
Parent topic: Using vmetrics
2.16 Using FIPS mode
On database servers running Oracle Linux 7 or later, you can enable the kernel to run in FIPS mode.
Starting with Oracle Exadata System Software release 20.1.0, you can enable and disable the Federal Information Processing Standards (FIPS) compatibility mode on Oracle Exadata database servers running Oracle Linux 7 or later.
After you enable or disable FIPS mode, you must reboot the server for the action to take effect.
To enable, disable, and get status information about FIPS mode, use the utility at /opt/oracle.cellos/host_access_control with the fips-mode option:
-
To display the current FIPS mode setting, run:
# /opt/oracle.cellos/host_access_control fips-mode --status
-
To enable FIPS mode, run:
# /opt/oracle.cellos/host_access_control fips-mode --enable
Then, reboot the server to finalize the action.
-
To disable FIPS mode, run:
# /opt/oracle.cellos/host_access_control fips-mode --disable
Then, reboot the server to finalize the action.
-
To display information about FIPS mode, run:
# /opt/oracle.cellos/host_access_control fips-mode --info
The following example shows the typical command sequence and command output for enabling and disabling FIPS mode on a server.
# /opt/oracle.cellos/host_access_control fips-mode --status
[2020-04-14 09:19:45 -0700] [INFO] [IMG-SEC-1101] FIPS mode is disabled
# /opt/oracle.cellos/host_access_control fips-mode --enable
[2020-04-14 09:30:10 -0700] [INFO] [IMG-SEC-1107] Using only FIPS compliant
SSH host keys and sshd configuration updated in /etc/ssh/sshd_config
[2020-04-14 09:30:10 -0700] [INFO] [IMG-SEC-1103] FIPS mode is set to
enabled. A reboot is required to effect this change.
# /opt/oracle.cellos/host_access_control fips-mode --status
[2020-04-14 09:30:14 -0700] [INFO] [IMG-SEC-1101] FIPS mode is configured but
not activated. A reboot is required to activate.
# reboot
...
# /opt/oracle.cellos/host_access_control fips-mode --status
[2020-04-14 09:23:15 -0700] [INFO] [IMG-SEC-1103] FIPS mode is configured and
active
# /opt/oracle.cellos/host_access_control fips-mode --disable
[2020-04-14 09:40:37 -0700] [INFO] [IMG-SEC-1103] FIPS mode is set to
disabled. A reboot is required to effect this change.
# /opt/oracle.cellos/host_access_control fips-mode --status
[2020-04-14 09:40:37 -0700] [INFO] [IMG-SEC-1103] FIPS mode is disabled but
is active. A reboot is required to deactivate FIPS mode.
# reboot
...
# /opt/oracle.cellos/host_access_control fips-mode --status
[2020-04-14 09:46:22 -0700] [INFO] [IMG-SEC-1101] FIPS mode is disabled
Parent topic: Maintaining Exadata Database Servers
2.17 Exadata Database Server LED Indicator Descriptions
The indicator LEDs on Oracle Exadata database servers help you to verify the system status and identify components that require servicing.
For information about the various indicator LEDs on Oracle Exadata database servers, see the section entitled Troubleshooting Using the Server Front and Back Panel Status Indicators in the server service manual for your system.
See Related Documentation for a list of the server service manuals.
Additionally, the Do Not Service LED, which is present only on Oracle Exadata X7-2 and later database servers, is not illuminated by Exadata software.
Parent topic: Maintaining Exadata Database Servers
2.18 Exadata Database Server Images
The Exadata database server models have different external layouts and physical appearance.
- Oracle Server X9-2 Database Server Images
Oracle Server X9-2 is used as the database server in Oracle Exadata X9M-2. - Oracle Server X8-2 Database Server Images
Oracle Server X8-2 is used as the database server in Oracle Exadata X8M-2 and X8-2. - Oracle Server X7-2 Oracle Database Server Images
- Oracle Server X6-2 Oracle Database Server Images
- Oracle Server X5-2 Oracle Database Server Images
- Sun Server X4-2 Oracle Database Server Images
- Sun Server X3-2 Oracle Database Server Images
- Sun Fire X4170 M2 Oracle Database Server Images
- Sun Fire X4170 Oracle Database Server Images
- Oracle Server X8-8 Database Server Images
Oracle Server X8-8 is used as the database server in Oracle Exadata X9M-8, X8M-8, and X8-8. - Oracle Server X7-8 Oracle Database Server Images
- Oracle Server X5-8 and X6-8 Oracle Database Server Images
- Sun Server X4-8 Oracle Database Server Images
- Sun Server X2-8 Oracle Database Server Images
- Sun Fire X4800 Oracle Database Server Images
Parent topic: Maintaining Exadata Database Servers
2.18.1 Oracle Server X9-2 Database Server Images
Oracle Server X9-2 is used as the database server in Oracle Exadata X9M-2.
The following image shows the front view of Oracle Server X9-2 Database Servers.
Figure 2-3 Front View of Oracle Server X9-2 Database Servers

Description of "Figure 2-3 Front View of Oracle Server X9-2 Database Servers"
The following image shows a rear view of Oracle Server X9-2. This image shows a server with two dual-port 25 Gb/s network interface cards (in PCI slot 1 and slot 3).
Figure 2-4 Rear View of Oracle Server X9-2 Database Servers

Description of "Figure 2-4 Rear View of Oracle Server X9-2 Database Servers"
Parent topic: Exadata Database Server Images
2.18.2 Oracle Server X8-2 Database Server Images
Oracle Server X8-2 is used as the database server in Oracle Exadata X8M-2 and X8-2.
The following image shows the front view of Oracle Server X8-2 Database Servers.
Figure 2-5 Front View of Oracle Server X8-2 Database Servers

Description of "Figure 2-5 Front View of Oracle Server X8-2 Database Servers"
The following image shows the rear view of Oracle Server X8-2 Database Servers.
Figure 2-6 Rear View of Oracle Server X8-2 Database Servers

Description of "Figure 2-6 Rear View of Oracle Server X8-2 Database Servers"
Parent topic: Exadata Database Server Images
2.18.3 Oracle Server X7-2 Oracle Database Server Images
The following image shows the front view of Oracle Server X7-2 Oracle Database Server.
Figure 2-7 Front View of Oracle Server X7-2 Oracle Database Server

Description of "Figure 2-7 Front View of Oracle Server X7-2 Oracle Database Server"
The following image shows the rear view of Oracle Server X7-2 Oracle Database Server.
Figure 2-8 Rear View of X7-2 Oracle Database Server

Description of "Figure 2-8 Rear View of X7-2 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.4 Oracle Server X6-2 Oracle Database Server Images
The following image shows the front view of Oracle Server X6-2 Oracle Database Server.
Figure 2-9 Front View of Oracle Server X6-2 Oracle Database Server

Description of "Figure 2-9 Front View of Oracle Server X6-2 Oracle Database Server"
The following image shows the rear view of Oracle Server X6-2 Oracle Database Server.
The top hard disk drives are, from left to right, HDD1 and HDD3. The lower drives are, from left to right, HDD0 and HDD2.
Figure 2-10 Rear View of Oracle Server X6-2 Oracle Database Server

Description of "Figure 2-10 Rear View of Oracle Server X6-2 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.5 Oracle Server X5-2 Oracle Database Server Images
The following image shows the front view of Oracle Server X5-2 Oracle Database Server.
Figure 2-11 Front View of Oracle Server X5-2 Oracle Database Server

Description of "Figure 2-11 Front View of Oracle Server X5-2 Oracle Database Server"
The following image shows the rear view of Oracle Server X5-2 Oracle Database Server.
The top hard disk drives are, from left to right, HDD1 and HDD3. The lower drives are, from left to right, HDD0 and HDD2.
Figure 2-12 Rear View of Oracle Server X5-2 Oracle Database Server

Description of "Figure 2-12 Rear View of Oracle Server X5-2 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.6 Sun Server X4-2 Oracle Database Server Images
The following image shows the front view of Sun Server X4-2 Oracle Database Server.
Figure 2-13 Front View of Sun Server X4-2 Oracle Database Server

Description of "Figure 2-13 Front View of Sun Server X4-2 Oracle Database Server"
The following image shows the rear view of Sun Server X4-2 Oracle Database Server.
Figure 2-14 Rear View of Sun Server X4-2 Oracle Database Server

Description of "Figure 2-14 Rear View of Sun Server X4-2 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.7 Sun Server X3-2 Oracle Database Server Images
The following image shows the front view of Sun Server X3-2 Oracle Database Server.
Figure 2-15 Front View of Sun Server X3-2 Oracle Database Server

Description of "Figure 2-15 Front View of Sun Server X3-2 Oracle Database Server"
The following image shows the rear view of Sun Server X3-2 Oracle Database Server.
Figure 2-16 Rear View of Sun Server X3-2 Oracle Database Server

Description of "Figure 2-16 Rear View of Sun Server X3-2 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.8 Sun Fire X4170 M2 Oracle Database Server Images
The following image shows the front view of Sun Fire X4170 M2 Oracle Database Server.
Figure 2-17 Front View of Sun Fire X4170 M2 Oracle Database Server

Description of "Figure 2-17 Front View of Sun Fire X4170 M2 Oracle Database Server"
-
Hard disk drives. The top drives are, from left to right, HDD1 and HDD3. The lower drives are, from left to right, HDD0 and HDD2.
The following image shows the rear view of Sun Fire X4170 M2 Oracle Database Server.
Figure 2-18 Rear View of Sun Fire X4170 M2 Oracle Database Server

Description of "Figure 2-18 Rear View of Sun Fire X4170 M2 Oracle Database Server"
-
InfiniBand host channel adapter
-
Gigabit Ethernet ports
Parent topic: Exadata Database Server Images
2.18.9 Sun Fire X4170 Oracle Database Server Images
The following image shows the front view of Sun Fire X4170 Oracle Database Server.
Figure 2-19 Front View of Sun Fire X4170 Oracle Database Server

Description of "Figure 2-19 Front View of Sun Fire X4170 Oracle Database Server"
-
Hard disk drives. The top drives are, from left to right, HDD1 and HDD3. The lower drives are, from left to right, HDD0 and HDD2.
The following image shows the rear view of Sun Fire X4170 Oracle Database Server.
Figure 2-20 Rear View of Sun Fire X4170 Oracle Database Server

Description of "Figure 2-20 Rear View of Sun Fire X4170 Oracle Database Server"
-
RDMA Network Fabric host channel adapter
-
Gigabit Ethernet ports
Parent topic: Exadata Database Server Images
2.18.10 Oracle Server X8-8 Database Server Images
Oracle Server X8-8 is used as the database server in Oracle Exadata X9M-8, X8M-8, and X8-8.
The following image shows the front view of Oracle Server X8-8 Database Server.
Figure 2-21 Front View of Oracle Database Server X8-8

Description of "Figure 2-21 Front View of Oracle Database Server X8-8 "
The following image shows the rear view of Oracle Database Server X8-8.
Figure 2-22 Rear View of Oracle Database Server X8-8

Description of "Figure 2-22 Rear View of Oracle Database Server X8-8"
Parent topic: Exadata Database Server Images
2.18.11 Oracle Server X7-8 Oracle Database Server Images
The following image shows the front view of Oracle Server X7-8 Oracle Database Server.
Figure 2-23 Front View of Oracle Server X7-8 Oracle Database Server

Description of "Figure 2-23 Front View of Oracle Server X7-8 Oracle Database Server"
The following image shows the rear view of Oracle Server X7-8 Oracle Database Server.
Figure 2-24 Rear View of Oracle Server X7-8 Oracle Database Server

Description of "Figure 2-24 Rear View of Oracle Server X7-8 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.12 Oracle Server X5-8 and X6-8 Oracle Database Server Images
The following image shows the front view of Oracle Server X5-8 Oracle Database Server.
Figure 2-25 Front View of Oracle Server X5-8 Oracle Database Server

Description of "Figure 2-25 Front View of Oracle Server X5-8 Oracle Database Server"
The following image shows the back view of Oracle Server X5-8 Oracle Database Server.
Figure 2-26 Back View of Oracle Server X5-8 Oracle Database Server

Description of "Figure 2-26 Back View of Oracle Server X5-8 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.13 Sun Server X4-8 Oracle Database Server Images
The following image shows the front view of Sun Server X4-8 Oracle Database Server.
Figure 2-27 Front View of Sun Server X4-8 Oracle Database Server

Description of "Figure 2-27 Front View of Sun Server X4-8 Oracle Database Server"
The following image shows the rear view of Sun Server X4-8 Oracle Database Server.
Figure 2-28 Rear View of Sun Server X4-8 Oracle Database Server

Description of "Figure 2-28 Rear View of Sun Server X4-8 Oracle Database Server"
Parent topic: Exadata Database Server Images
2.18.14 Sun Server X2-8 Oracle Database Server Images
The following image shows the front view of Sun Server X2-8 Oracle Database Server.
Figure 2-29 Front View of Sun Server X2-8 Oracle Database Server

Description of "Figure 2-29 Front View of Sun Server X2-8 Oracle Database Server"
-
Power supplies.
-
Hard disk drives. The top drives are, from left to right, XL4, XL5, XL6, and XL7. The lower drives are, from left to right, XL0, XL1, XL2, and XL3.
-
CPU modules. The modules are, from bottom to top, BL0, BL1, BL2, and BL3.
The following image shows the rear view of Sun Server X2-8 Oracle Database Server.
Figure 2-30 Rear View of Sun Server X2-8 Oracle Database Server

Description of "Figure 2-30 Rear View of Sun Server X2-8 Oracle Database Server"
-
Fan modules.
-
Network Express Module.
-
InfiniBand EM (CX2) dual port PCI Express modules.
Parent topic: Exadata Database Server Images
2.18.15 Sun Fire X4800 Oracle Database Server Images
The following image shows the front view of Sun Fire X4800 Oracle Database Server.
Figure 2-31 Front View of Sun Fire X4800 Oracle Database Server

Description of "Figure 2-31 Front View of Sun Fire X4800 Oracle Database Server"
-
Power supplies.
-
Hard disk drives. The top drives are, from left to right, XL4, XL5, XL6, and XL7. The lower drives are, from left to right, XL0, XL1, XL2, and XL3.
-
CPU modules. The modules are, from bottom to top, BL0, BL1, BL2, and BL3.
The following image shows the rear view of Sun Fire X4800 Oracle Database Server.
Figure 2-32 Rear View of Sun Fire X4800 Oracle Database Server

Description of "Figure 2-32 Rear View of Sun Fire X4800 Oracle Database Server"
-
Fan modules.
-
Network Express Module.
-
InfiniBand EM (CX2) dual port PCI Express modules.
Parent topic: Exadata Database Server Images