2 Configuring Oracle Exadata System Software
This chapter describes the major steps to configure Oracle Exadata System Software.
You determine the number of disks and cells needed in the grid based on your requirements for capacity, performance, and redundancy.
Hardware and software have already been installed for the storage servers. The procedures in this chapter describe how to configure a storage cell for use with the Oracle Database and Oracle Automatic Storage Management (Oracle ASM) instances.
Note:
Modifications to the Oracle Exadata Storage Server hardware or software are not supported. Only the documented network interfaces on the Oracle Exadata Storage Server should be used for all connectivity, including management and storage traffic. Additional network interfaces should not be used.
- Understanding Oracle Exadata System Software Release Numbering
The Oracle Exadata System Software release number is related to the Oracle Database release number.
- Understanding Oracle Exadata Storage Server Configuration
Oracle Exadata Storage Server ships with all hardware and software pre-installed; however, you must configure Oracle Exadata System Software for your environment.
- Network Configuration and IP Addresses Recommendations
Follow the recommended network configuration for Oracle Exadata Storage Server.
- Assigning IP Addresses for Oracle Exadata
This topic summarizes the Oracle Exadata network preparation before installing the new storage server.
- Configuring Oracle Exadata System Software for Your Location
Customize the software installation for your environment.
- Configuring Cells, Cell Disks, and Grid Disks with CellCLI
Configure the cells, cell disks, and grid disks for each new Oracle Exadata Storage Server.
- Creating Flash Cache and Flash Grid Disks
Oracle Exadata Storage Servers are equipped with flash disks. These flash disks can be used to create flash grid disks to store frequently accessed data.
- Setting Up Configuration Files for a Database Server Host
After Oracle Exadata Storage Server is configured, the database server host must be configured with the cellinit.ora and cellip.ora files to use the cell.
- Understanding Automated Cell Maintenance
The Management Server (MS) includes a file deletion policy based on the date.
2.1 Understanding Oracle Exadata System Software Release Numbering
The Oracle Exadata System Software release number is related to the Oracle Database release number.
The Oracle Exadata System Software release number matches the highest Oracle Grid Infrastructure and Oracle Database version it supports. For example, the highest version Oracle Exadata System Software release 18 supports is Oracle Grid Infrastructure and Oracle Database release 18. The highest version Oracle Exadata System Software release 12.2 supports is Oracle Grid Infrastructure and Oracle Database release 12.2.0.1.
Release 18c and Later Numbering
The Oracle Exadata System Software release that followed release 12.2.1.1.8 was renamed to 18.1.0, and a new numbering scheme for the Oracle Exadata System Software was implemented. Instead of the legacy nomenclature, such as 12.2.1.1.5, a three-field Year.Update.Revision format is used, for example, 18.1.0. This numbering scheme allows you to clearly determine:
- The annual release designation of the software
- The latest software update, which can contain new features
- The latest software revision, which includes security and software fixes
If there are new features or new hardware supported, a new software update is released during the year, for example, 19.2. To allow you to keep current on just security-related and other software fixes after your environment becomes stable, software revisions are made available approximately once a month, for example, 19.1.3.
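As a quick illustration of the scheme, the three fields can be pulled apart programmatically. The following Python sketch (the function name is hypothetical) splits a release string into its Year.Update.Revision fields:

```python
# Sketch: interpreting an Exadata System Software release string under the
# Year.Update.Revision scheme described above. Illustrative only.

def parse_release(release: str) -> dict:
    """Split a release string such as '19.1.3' into its three fields."""
    year, update, revision = (int(part) for part in release.split("."))
    return {"year": year, "update": update, "revision": revision}

print(parse_release("18.1.0"))   # the first release under the new scheme
print(parse_release("19.1.3"))   # a monthly revision of update 19.1
```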
Numbering for Releases Prior to 18c
- The first two digits of the Oracle Exadata System Software release number represent the major Oracle Database release number, such as Oracle Database 12c Release 1 (12.1). Oracle Exadata System Software release 12.1 is compatible with all Oracle Database 12c Release 1 (12.1) releases.
- The third digit usually represents the component-specific Oracle Database release number. This digit usually matches the fourth digit of the complete release number, such as 12.1.0.1.0 for the current release of Oracle Database.
- The last two digits represent the Oracle Exadata System Software release.
2.2 Understanding Oracle Exadata Storage Server Configuration
Oracle Exadata Storage Server ships with all hardware and software pre-installed; however, you must configure Oracle Exadata System Software for your environment.
This topic provides a general overview of the configuration tasks. Subsequent topics describe the actual procedures.
- Assign IP Addresses for the Storage Servers
As part of configuring the storage servers, you assign IP addresses to connect the storage to the various networks.
- Configure the Storage Server for Your Location
Configure the storage server for use within your company.
- Configure the Storage Cell
Use the ALTER CELL command to configure the cell.
- Verify Storage Cell Attributes
Use the LIST CELL DETAIL command to verify the storage cell attributes.
- Create the Storage Cell Disks
Use the CREATE CELLDISK command to create the cell disks.
- Create the Grid Disks
Use the CREATE GRIDDISK command to create the grid disks. The size of the disks depends on your requirements.
- Create the PMEM Cache
Persistent memory (PMEM) is only available in Exadata X8M and X9M storage server models.
- Create the Flash Disks and Flash Cache
By default, the CREATE CELL command creates flash cell disks on all flash disks. The command then creates Exadata Smart Flash Cache on the flash cell disks.
- Configure Oracle Auto Service Request (ASR)
Oracle Auto Service Request (ASR) for Oracle Exadata automatically creates service requests by detecting common hardware faults.
Parent topic: Configuring Oracle Exadata System Software
2.2.1 Assign IP Addresses for the Storage Servers
As part of configuring the storage servers, you assign IP addresses to connect the storage to the various networks.
Assign IP addresses for each storage server for the following ports:
- Network access port
- Remote management port
- RDMA Network Fabric port
2.2.2 Configure the Storage Server for Your Location
Configure the storage server for use within your company.
- Power on the storage server.
- Change the default passwords.
- Set the time zone on the storage server to match the local time.
- Configure other information as needed, such as NTP and DNS servers.
2.2.3 Configure the Storage Cell
Use the ALTER CELL
command to configure the cell.
In Example 2-1, e-mail notification is configured to
send notification messages to the storage server administrator
according to the specified notification policy. The hyphen (-)
at the end of each line continues the ALTER CELL
command onto the next line; press Enter after the final line
to run the command. As an alternative, you can run the
command from a text file.
Example 2-1 Configuring a New Cell
CellCLI> ALTER CELL -
mailServer='mail_relay.example.com', -
smtpFromAddr='john.doe@example.com', -
smtpToAddr='jane.smith@example.com', -
notificationPolicy='clear', -
notificationMethod='mail,snmp'
2.2.4 Verify Storage Cell Attributes
Use the LIST CELL DETAIL
command to verify the storage cell attributes.
Example 2-2 Viewing Storage Cell Details
This example shows how to view the storage cell attributes.
CellCLI> LIST CELL DETAIL
name: cell01
accessLevelPerm: remoteLoginEnabled
bbuStatus: normal
cellVersion: OSS_19.3.0.0.0_LINUX.X64_190910
cpuCount: 64/64
diagHistoryDays: 7
doNotServiceLEDStatus: off
fanCount: 12/12
fanStatus: normal
flashCacheMode: WriteBack
httpsAccess: ALL
id: 1904XCA016
interconnectCount: 2
interconnect1: ib0
interconnect2: ib1
iormBoost: 0.0
ipaddress1: 192.168.41.245/21
ipaddress2: 192.168.41.246/21
kernelVersion: 4.14.35-1902.5.0.el7uek.x86_64
locatorLEDStatus: off
mailServer: mail_relay.example.com
makeModel: Oracle Corporation ORACLE SERVER X8-2L High Capacity
memoryGB: 188
metricHistoryDays: 7
offloadGroupEvents:
pmemCacheMode: WriteThrough
powerCount: 2/2
powerStatus: normal
ramCacheMaxSize: 0
ramCacheMode: On
ramCacheSize: 0
releaseImageStatus: success
releaseVersion: 19.3.0.0.0.190824
rpmVersion: cell-19.3.0.0.0_LINUX.X64_190824-1.x86_64
releaseTrackingBug: 29344484
smtpFrom: "Exadata Admins"
smtpFromAddr: exa_admins@example.com
smtpToAddr: jane.smith@example.com
snmpSubscriber: host=host1,port=162,community=public,type=asr,asrmPort=16161
status: online
temperatureReading: 29.0
temperatureStatus: normal
upTime: 2 days, 7:05
usbStatus: normal
cellsrvStatus: stopped
msStatus: running
rsStatus: running
2.2.5 Create the Storage Cell Disks
Use the CREATE CELLDISK
command to create the cell disks.
In Example 2-3, the ALL
option creates all the cell disks using the default names.
The cell disks are created with names of the form CD_lunID_cellname, where lunID is the id attribute of the LUN and cellname is the name attribute of the cell. You can specify other disk names if you create cell disks individually.
On Oracle Exadata Storage Server with flash disks, the CREATE CELLDISK ALL
command also creates cell disks on the flash disks.
CellCLI> CREATE CELLDISK ALL
CellDisk FD_01_cell01 successfully created
CellDisk FD_02_cell01 successfully created
CellDisk FD_03_cell01 successfully created
CellDisk FD_04_cell01 successfully created
CellDisk FD_05_cell01 successfully created
CellDisk FD_06_cell01 successfully created
CellDisk FD_07_cell01 successfully created
CellDisk FD_08_cell01 successfully created
CellDisk FD_09_cell01 successfully created
CellDisk FD_10_cell01 successfully created
CellDisk FD_11_cell01 successfully created
CellDisk FD_12_cell01 successfully created
CellDisk FD_13_cell01 successfully created
CellDisk FD_14_cell01 successfully created
CellDisk FD_15_cell01 successfully created
Note:
The CREATE CELLDISK
command creates cell disks on flash disks if they do not currently exist. If there are cell disks on the flash disks, then they are not created again.
Example 2-3 Creating Cell Disks
CellCLI> CREATE CELLDISK ALL
CellDisk CD_00_cell01 successfully created
CellDisk CD_01_cell01 successfully created
CellDisk CD_02_cell01 successfully created
...
CellDisk CD_10_cell01 successfully created
CellDisk CD_11_cell01 successfully created
2.2.6 Create the Grid Disks
Use the CREATE GRIDDISK
command to create the grid disks. The size of the disks depends on your requirements.
Example 2-4 Creating Grid Disks
This example shows how to create grid disks. In this example, the ALL HARDDISK PREFIX
option creates one grid disk on each cell disk of the storage cell. The Oracle ASM disk group name is used with PREFIX
to identify which grid disk belongs to the disk group. Prefix values data
and reco
are the names of the Oracle ASM disk groups that are created.
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=data, size=300G
GridDisk data_CD_00_cell01 successfully created
GridDisk data_CD_01_cell01 successfully created
GridDisk data_CD_02_cell01 successfully created
...
GridDisk data_CD_11_cell01 successfully created
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=reco, size=600G
GridDisk reco_CD_00_cell01 successfully created
GridDisk reco_CD_01_cell01 successfully created
GridDisk reco_CD_02_cell01 successfully created
...
GridDisk reco_CD_11_cell01 successfully created
The LIST GRIDDISK
command shows the grid disks that are created.
CellCLI> LIST GRIDDISK
data_CD_00_cell01 active
data_CD_01_cell01 active
data_CD_02_cell01 active
...
data_CD_11_cell01 active
reco_CD_00_cell01 active
reco_CD_01_cell01 active
reco_CD_02_cell01 active
...
reco_CD_11_cell01 active
Example 2-5 Creating a Sparse Grid Disk
In this example, the sparse grid disk uses up to 300 GB of physical space on the cell disk, but it exposes 20000 GB of virtual space for Oracle ASM files.
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=sp, size=300G, virtualsize=20000G
GridDisk sp_CD_00_cell01 successfully created
GridDisk sp_CD_01_cell01 successfully created
GridDisk sp_CD_02_cell01 successfully created
...
GridDisk sp_CD_11_cell01 successfully created
2.2.7 Create the PMEM Cache
Persistent memory (PMEM) is only available in Exadata X8M and X9M storage server models.
By default, the CREATE CELL
command creates the cell disks, which
are used when creating the PMEM cache.
- Use the CREATE PMEMCACHE ALL command to create the PMEM cache on all PMEM cell disks.
You can use the ALTER PMEMCACHE
command to alter the set of cell disks used by PMEM cache, flush dirty blocks from PMEM cache, or cancel a previous flush operation on the specified cell disks to re-enable caching.
2.2.8 Create the Flash Disks and Flash Cache
By default, the CREATE CELL
command creates flash cell disks on all flash disks. The command then creates Exadata Smart Flash Cache on the flash cell disks.
- Use the CREATE GRIDDISK ALL FLASHDISK PREFIX='FLASH' and CREATE FLASHCACHE commands to create the flash disks and flash cache.
To change the size of Exadata Smart Flash Cache, or to create flash grid disks, you must first drop the flash cache, and then either re-create the flash cache with a different size or create the flash grid disks.
2.2.9 Configure Oracle Auto Service Request (ASR)
Oracle Auto Service Request (ASR) for Oracle Exadata automatically creates service requests by detecting common hardware faults.
ASR support covers selected components, such as disks and flash cards, in Oracle Exadata Storage Servers and Oracle Exadata Database Servers.
- If you did not elect to configure Oracle Auto Service Request (ASR) when using Oracle Exadata Deployment Assistant (OEDA) to configure your Oracle Exadata Rack, then refer to Oracle Auto Service Request Quick Installation Guide for Oracle Exadata Database Machine for configuration instructions.
2.3 Network Configuration and IP Addresses Recommendations
Follow the recommended network configuration for Oracle Exadata Storage Server.
-
If your network is not already configured, then set up a fault-tolerant, private network subnet for Oracle Exadata storage servers and database servers with multiple switches to eliminate the switch as a single point of failure. If all the interconnections in the storage network are connected through a single switch, then that switch can be a single point of failure.
If you are using a managed switch, then ensure that the switch VLAN configuration isolates storage server network traffic from all other network traffic.
-
Allocate a block of IP addresses for the storage server general administration and the Integrated Lights Out Manager (ILOM) interfaces. Typically, these interfaces are on the same subnet, and may share the subnet with other hosts. For example, on the 192.168.200.0/24 subnet, you could assign the block of IP addresses between 192.168.200.31 and 192.168.200.100 for storage server general administration and ILOM interfaces. Other hosts sharing the subnet would be allocated IP addresses outside the block. The general administration and ILOM interfaces can be placed on separate subnets, but this is not required.
The following is a sample of four non-overlapping blocks of addresses. One set of addresses should be assigned to the normal Gigabit Ethernet interface for the storage servers. The other blocks can be assigned for the ILOM port for the storage servers. The third set can be used for the database server Gigabit Ethernet ports, and the fourth for the database server ILOM ports.
192.168.200.0/21 (netmask 255.255.248.0)
192.168.208.0/21 (netmask 255.255.248.0)
192.168.216.0/21 (netmask 255.255.248.0)
192.168.224.0/21 (netmask 255.255.248.0)
-
The RDMA Network Fabric network should be a private network for use by the database server hosts and storage servers, and can have private local network addresses. These addresses must also be allocated in non-overlapping blocks.
The following example has 2 blocks of local RDMA Network Fabric addresses. Both the database server RDMA Network Fabric addresses and the storage server RDMA Network Fabric addresses must be in the same subnet to communicate with each other. With bonding, only one subnet is necessary for RDMA Network Fabric addresses.
192.168.50.0/24 (netmask 255.255.255.0)
192.168.51.0/24 (netmask 255.255.255.0)
The preceding subnet blocks do not conflict with each other, and do not conflict with the current allocation to any of the hosts. It is a good practice to allocate the subnet blocks so that they have an identical netmask, which helps to simplify network administration.
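The non-overlap and identical-netmask properties described above can be checked mechanically. The following Python sketch uses the standard ipaddress module with the sample /21 administration blocks; it is illustrative only, not an Oracle tool:

```python
import ipaddress

# Sample administration blocks from the text above.
blocks = [
    ipaddress.ip_network("192.168.200.0/21"),
    ipaddress.ip_network("192.168.208.0/21"),
    ipaddress.ip_network("192.168.216.0/21"),
    ipaddress.ip_network("192.168.224.0/21"),
]

# Verify that no two blocks overlap.
for i, a in enumerate(blocks):
    for b in blocks[i + 1:]:
        if a.overlaps(b):
            raise ValueError(f"{a} overlaps {b}")

# All blocks share an identical netmask, which simplifies administration.
assert len({b.netmask for b in blocks}) == 1
print("no overlaps; common netmask:", blocks[0].netmask)
```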
-
For Oracle Exadata, the maximum allowed number of hosts in an RDMA Network Fabric network is 4096. Therefore, the network prefix value for the RDMA Network Fabric network must be equal to or greater than 20. This means the netmask must be between 255.255.240.0 and 255.255.255.254 inclusive.
You can determine the network prefix value for a given host IP address and its netmask using the ipcalc utility on any Oracle Linux machine, as follows:
ipcalc <host ip address> -m <netmask for the host> -p
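If ipcalc is not at hand, the same prefix computation can be sketched with Python's standard ipaddress module (the function name here is hypothetical):

```python
import ipaddress

def network_prefix(host_ip: str, netmask: str) -> int:
    """Return the prefix length, like `ipcalc <ip> -m <mask> -p`."""
    return ipaddress.ip_network(f"{host_ip}/{netmask}", strict=False).prefixlen

# A host on a /24 subnet, as in the RDMA Network Fabric example above.
prefix = network_prefix("192.168.50.27", "255.255.255.0")
print(f"PREFIX={prefix}")

# The requirement above: the RDMA Network Fabric prefix must be >= 20,
# which caps the subnet at 2 ** (32 - 20) = 4096 addresses.
assert prefix >= 20
```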
-
Do not allocate addresses that end in .0, .1, or .255, or those that would be used as broadcast addresses for the specific netmask that you have selected. For example, avoid addresses such as 192.168.200.0, 192.168.200.1, and 192.168.200.255.
-
Ensure the network allows for future expansion. For example, 255.255.255.254 is a valid netmask (prefix /31), but it allows only one host.
-
If a domain name system (DNS) is required, then set up your DNS to help reference storage servers and interconnections. Oracle Exadata storage servers do not require DNS. However, if DNS is required, then set up your DNS with the appropriate IP address and host name of the storage servers.
-
The RDMA Network Fabric network should be used for network and storage communication when using Oracle Clusterware. Use the following command to verify the private network for Oracle Clusterware communication is using the RDMA Network Fabric:
oifcfg getif -type cluster_interconnect
-
The Reliable Data Socket (RDS) protocol should be used over the RDMA Network Fabric network for database server to storage server communication and Oracle Real Application Clusters (Oracle RAC) communication. Check the alert log to verify the private network for Oracle RAC is running the RDS protocol over the RDMA Network Fabric network. The following message should be in the alert logs for the instances:
cluster interconnect IPC version: Oracle RDS/IP (generic)
If the RDS protocol is not being used over the RDMA Network Fabric network, then perform the following procedure:
-
Shut down any processes that are using the Oracle software.
-
Change to the ORACLE_HOME/rdbms/lib directory.
-
Relink the Oracle Database software with the RDS protocol:
make -f ins_rdbms.mk ipc_rds ioracle
Note:
If Oracle ASM uses a separate Oracle home from the database instance, then RDS should be enabled for the binaries in both homes.
Parent topic: Configuring Oracle Exadata System Software
2.4 Assigning IP Addresses for Oracle Exadata
This topic summarizes the Oracle Exadata network preparation before installing the new storage server.
Each storage server contains the following network ports:
-
One dual-port RDMA Network Fabric card
Oracle Exadata Storage Servers are designed to be connected to two separate RDMA Network Fabric switches for high availability. The dual port card is only for availability. Each port of the RDMA Network Fabric card is capable of transferring the full data bandwidth generated by the storage server. The loss of one network connection does not impact the performance of the storage server.
-
Ethernet ports for normal network access, depending on the platform
-
Oracle Exadata X6-2 and earlier storage servers come with four Ethernet ports. However, only connect one port to a switch, and configure it for network access.
-
One Ethernet port is exposed by the Baseboard Management Controller (BMC), or Management Controller (MC) on the storage server. This port is used for Integrated Lights Out Manager (ILOM).
To prepare the storage server for network access, you must perform the following steps:
-
Assign one address to the bonded RDMA Network Fabric port. When you first set up the storage server, you are prompted for the RDMA Network Fabric configuration information. This information is used automatically during the CREATE CELL command on first boot, and provides the data path for communication between the storage server and the database servers. To change the IP address after the initial configuration, use the following command, where interface_name is the interface name for the RDMA Network Fabric:
CREATE CELL interconnect1=interface_name, interconnect2=interface_name
For InfiniBand Network Fabric networks, the interface names are ib0 and ib1. For RoCE Network Fabric networks, the interface names are re0 and re1.
-
Assign an IP address to the storage server for network access.
-
Assign an IP address to the storage server for ILOM.
You can access the remote management functionality with a Java-enabled Web browser at the assigned IP address.
See Also:
Oracle Integrated Lights Out Manager (ILOM) Documentation at http://www.oracle.com/goto/ilom/docs
Parent topic: Configuring Oracle Exadata System Software
2.5 Configuring Oracle Exadata System Software for Your Location
Customize the software installation for your environment.
- Configuring ILOM With Static IP for Oracle Exadata Storage Servers
Basic lights-out remote management configuration is done during the first boot.
- Preparing the Servers
Use the following steps to prepare the database servers and storage servers for use.
Parent topic: Configuring Oracle Exadata System Software
2.5.1 Configuring ILOM With Static IP for Oracle Exadata Storage Servers
Basic lights-out remote management configuration is done during the first boot.
Refer to Preparing the Servers for Integrated Lights Out Manager (ILOM) configuration information.
Caution:
Do not enable the sideband management available in ILOM. Doing so disables all the SNMP agent reporting and monitoring functionality for the server.
2.6 Configuring Cells, Cell Disks, and Grid Disks with CellCLI
Configure the cells, cell disks and grid disks for each new Oracle Exadata Storage Server.
During the procedure, you can display help using the HELP
command, and object attributes using the DESCRIBE
command. Example 2-6 shows how to display help and a list of attributes for Oracle Exadata CELL
objects.
Example 2-6 Displaying Help Information
CellCLI> HELP
CellCLI> HELP CREATE CELL
CellCLI> HELP ALTER CELL
CellCLI> DESCRIBE CELL
After you complete the cell configuration, you can perform the following optional steps on the storage cell:
-
Add the storage cell to the Exadata Cell realm.
-
Configure security on the Oracle Exadata Storage Server grid disks, as described in Configuring Security for Oracle Exadata Storage Server Software.
-
Configure an inter-database plan for a cell rather than using the default plans, as described in Managing I/O Resources.
For database server hosts other than those in Oracle Exadata, refer to release notes for enabling them to work with Oracle Exadata Storage Servers.
Related Topics
Parent topic: Configuring Oracle Exadata System Software
2.7 Creating Flash Cache and Flash Grid Disks
Oracle Exadata Storage Servers are equipped with flash disks. These flash disks can be used to create flash grid disks to store frequently accessed data.
Alternatively, all or part of the flash disk space can be dedicated to Exadata Smart Flash Cache. In this case, the most frequently-accessed data is cached in Exadata Smart Flash Cache.
The ALTER CELLDISK ... FLUSH
command must be run before exporting a cell disk to ensure that the data not synchronized with the disk (dirty data) is flushed from flash cache to the grid disks.
Example 2-7 Using the CREATE FLASHCACHE Command
This example shows how to create the Exadata Smart Flash Cache. The entire size of the flash cell disk is not used because the size
attribute has been set.
CellCLI> CREATE FLASHCACHE ALL size=100g
Flash cache cell01_FLASHCACHE successfully created
Example 2-8 Using the CREATE GRIDDISK Command to Create Flash Grid Disks
This example shows how to use the remaining space on the flash cell disks to create flash grid disks.
CellCLI> CREATE GRIDDISK ALL FLASHDISK PREFIX='FLASH'
GridDisk FLASH_FD_00_cell01 successfully created
GridDisk FLASH_FD_01_cell01 successfully created
GridDisk FLASH_FD_02_cell01 successfully created
GridDisk FLASH_FD_03_cell01 successfully created
GridDisk FLASH_FD_04_cell01 successfully created
GridDisk FLASH_FD_05_cell01 successfully created
GridDisk FLASH_FD_06_cell01 successfully created
GridDisk FLASH_FD_07_cell01 successfully created
GridDisk FLASH_FD_08_cell01 successfully created
GridDisk FLASH_FD_09_cell01 successfully created
GridDisk FLASH_FD_10_cell01 successfully created
GridDisk FLASH_FD_11_cell01 successfully created
GridDisk FLASH_FD_12_cell01 successfully created
GridDisk FLASH_FD_13_cell01 successfully created
GridDisk FLASH_FD_14_cell01 successfully created
GridDisk FLASH_FD_15_cell01 successfully created
CellCLI> LIST GRIDDISK
FLASH_FD_00_cell01 active
FLASH_FD_01_cell01 active
FLASH_FD_02_cell01 active
FLASH_FD_03_cell01 active
FLASH_FD_04_cell01 active
FLASH_FD_05_cell01 active
FLASH_FD_06_cell01 active
FLASH_FD_07_cell01 active
FLASH_FD_08_cell01 active
FLASH_FD_09_cell01 active
FLASH_FD_10_cell01 active
FLASH_FD_11_cell01 active
FLASH_FD_12_cell01 active
FLASH_FD_13_cell01 active
FLASH_FD_14_cell01 active
FLASH_FD_15_cell01 active
Example 2-9 Displaying the Exadata Smart Flash Cache Configuration for a Cell
Use the LIST FLASHCACHE
command to display the Exadata Smart Flash Cache configuration for the cell, as shown in this example.
CellCLI> LIST FLASHCACHE DETAIL
name: cell01_FLASHCACHE
cellDisk: FD_00_cell01, FD_01_cell01, FD_02_cell01,
FD_03_cell01, FD_04_cell01, FD_05_cell01,
FD_06_cell01, FD_07_cell01, FD_08_cell01,
FD_09_cell01, FD_10_cell01, FD_11_cell01,
FD_12_cell01, FD_13_cell01, FD_14_cell01,
FD_15_cell01
creationTime: 2009-10-19T17:18:35-07:00
id: b79b3376-7b89-4de8-8051-6eefc442c2fa
size: 365.25G
status: normal
Example 2-10 Dropping Exadata Smart Flash Cache from a Cell
To remove Exadata Smart Flash Cache from a cell, use the DROP FLASHCACHE
command.
CellCLI> DROP FLASHCACHE
Flash cache cell01_FLASHCACHE successfully dropped
Related Topics
Parent topic: Configuring Oracle Exadata System Software
2.8 Setting Up Configuration Files for a Database Server Host
After Oracle Exadata Storage Server is configured, the database server host must be configured with the cellinit.ora and cellip.ora files to use the cell.
The files are located in the /etc/oracle/cell/network-config directory.
- The cellinit.ora file contains the database IP addresses.
- The cellip.ora file contains the storage cell IP addresses.
Both files are located on the database server host. These configuration files contain IP addresses, not host names.
The cellinit.ora file is host-specific, and contains all database IP addresses that connect to the storage network used by Oracle Exadata Storage Servers. This file must exist on each database server that connects to Oracle Exadata Storage Servers. The IP addresses are specified in Classless Inter-Domain Routing (CIDR) format. The first IP address must be designated as ipaddress1, the second IP address as ipaddress2, and so on.
The following examples show the IP address entries for a single database server in Oracle Exadata Database Machine:
-
Oracle Exadata Database Server in Oracle Exadata Database Machine X4-2
ipaddress1=192.168.10.1/22
ipaddress2=192.168.10.2/22
-
Oracle Exadata Database Server in Oracle Exadata Database Machine X3-2 or Oracle Exadata Database Machine X2-2
ipaddress1=192.168.50.23/24
-
Oracle Exadata Database Server in Oracle Exadata Database Machine X3-8 or Oracle Exadata Database Machine X2-8
ipaddress1=192.168.41.111/21
ipaddress2=192.168.41.112/21
ipaddress3=192.168.41.113/21
ipaddress4=192.168.41.114/21
The IP addresses should not be changed after this file is created.
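A quick way to sanity-check entries like these is to parse them with Python's standard ipaddress module. The sketch below reuses the X3-8/X2-8 sample addresses above; the entries dictionary and the checks are illustrative only, not an Oracle tool:

```python
import ipaddress

# Hypothetical cellinit.ora-style entries, reusing the X3-8/X2-8 example
# addresses shown above.
entries = {
    "ipaddress1": "192.168.41.111/21",
    "ipaddress2": "192.168.41.112/21",
    "ipaddress3": "192.168.41.113/21",
    "ipaddress4": "192.168.41.114/21",
}

# Each entry must parse as CIDR, and all must share one storage subnet.
interfaces = [ipaddress.ip_interface(cidr) for cidr in entries.values()]
subnets = {iface.network for iface in interfaces}
assert len(subnets) == 1
print("storage subnet:", subnets.pop())   # storage subnet: 192.168.40.0/21
```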
Note:
At boot time on an 8-socket system, each database server generates a cellaffinity.ora configuration file. The cellaffinity.ora file resides in the /etc/oracle/cell/network-config directory, and must be readable by Oracle Database.
The file contains a mapping between the NUMA node numbers and the IP address of the network interface card closest to each server. Oracle Database uses the file to select the closest network interface card when communicating with Oracle Exadata Storage Servers, thereby optimizing performance.
This file is generated and used only on an 8-socket system. On a 2-socket system, there is no performance to be gained in this manner, so there is no cellaffinity.ora file. The file is not intended to be edited directly with a text editor.
To configure a database server host for use with a cell, refer to Maintaining the RDMA Network Fabric for RoCE Network or Maintaining the RDMA Network Fabric for InfiniBand Network.
Parent topic: Configuring Oracle Exadata System Software
2.9 Understanding Automated Cell Maintenance
The Management Server (MS) includes a file deletion policy based on the date.
When there is a shortage of space in the Automatic Diagnostic Repository (ADR) directory, MS deletes the following files:
- All files in the ADR base directory older than 7 days.
- All files in the LOG_HOME directory older than 7 days.
- All metric history files older than 7 days.
The retention period of seven days is the default. You can modify the retention period by using the metricHistoryDays and diagHistoryDays attributes with the ALTER CELL command. The diagHistoryDays attribute controls the ADR files, and the metricHistoryDays attribute controls the other files.
If there is sufficient disk space, then trace files are not purged. This can result in files persisting in the ADR base directory past the time limit specified by diagHistoryDays.
In addition, the alert.log
file is renamed if it is larger than 10 MB, and versions of the file that are older than 7 days are deleted if their total size is greater than 50 MB.
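The alert.log housekeeping just described can be modeled as a small pure function over file metadata. The following Python sketch is illustrative only; the function name and data layout are hypothetical, and the thresholds come from the text above:

```python
# Sketch of the alert.log housekeeping described above. Thresholds come
# from the text; the function and variable names are hypothetical.

RENAME_SIZE = 10 * 1024 * 1024        # rename alert.log above 10 MB
PURGE_AGE_DAYS = 7                    # old copies become eligible after 7 days
PURGE_TOTAL = 50 * 1024 * 1024        # ...once their total exceeds 50 MB

def versions_to_delete(versions):
    """versions: list of (age_days, size_bytes) for renamed alert.log copies."""
    old = [(age, size) for age, size in versions if age > PURGE_AGE_DAYS]
    total = sum(size for _, size in old)
    return old if total > PURGE_TOTAL else []

# Three 10-day-old copies of 20 MB each (60 MB total): all three are eligible.
print(len(versions_to_delete([(10, 20 * 1024 ** 2)] * 3)))   # 3
```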
MS includes a file deletion policy that is triggered when file system utilization is high. Deletion of files in the / (root) directory and the /var/log/oracle directory is triggered when file utilization is 80 percent. Deletion of files in the /opt/oracle file system is triggered when file utilization reaches 90 percent, and the alert is cleared when utilization is below 85 percent. An alert is sent before the deletion begins. The alert includes the name of the directory, and space usage for the subdirectories. In particular, the deletion policy is as follows:
-
For the /var/log/oracle file system, files in the ADR base directory, metric history directory, and LOG_HOME
directory are deleted using a policy based on the file modification time stamp.
- Files older than the number of days set by the
metricHistoryDays
attribute value are deleted first - Successive deletions occur for earlier files, down to files with modification time stamps older than or equal to 10 minutes, or until file system utilization is less than 75 percent.
- The renamed
alert.log
files andms-odl
generation files that are over 5 MB, and older than the successively-shorter age intervals are also deleted. - Crash files in the
directory that are more than one day old can be deleted. If the space pressure is not heavy, then the retention time for crash files is the same as for other files. If there are empty directories under/var/log/oracle/crashfiles
, these directories are also deleted./var/log/oracle/crashfiles
-
For the /opt/oracle file system, the deletion policy is similar to the preceding settings. However, the file threshold is 90 percent, and files are deleted until the file system utilization is less than 85 percent.
-
When file system utilization is full, the files controlled by the diagHistoryDays and metricHistoryDays attributes are purged using the same purging policy.
-
For the / file system, files in the cellmonitor and celladmin home directories, and files in the /tmp, /var/crash, and /var/spool directories, that are over 5 MB and older than one day are deleted.
Every hour, MS deletes eligible alerts from the alert history using the following criteria. Alerts are eligible if they are stateless or if they are stateful alerts that have been resolved.
-
If there are fewer than 500 alerts, then alerts older than 100 days are deleted.
-
If there are between 500 and 999 alerts, then alerts older than 7 days are deleted.
-
If there are 1,000 or more alerts, then all eligible alerts are deleted every minute.
Note:
Any directories or files with SAVE in the name are not deleted.
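The hourly cleanup tiers above can be summarized as a retention-cutoff function. The following Python sketch is illustrative only; the function name is hypothetical:

```python
# Sketch of the hourly alert-history cleanup rules described above.
# "Eligible" means stateless alerts, or stateful alerts that are resolved.

def retention_days(alert_count: int) -> int:
    """Return the retention cutoff, in days, for eligible alerts."""
    if alert_count < 500:
        return 100         # keep eligible alerts for up to 100 days
    if alert_count < 1000:
        return 7           # tighter 7-day window
    return 0               # 1,000+ alerts: delete all eligible alerts

assert retention_days(499) == 100
assert retention_days(500) == 7
assert retention_days(1000) == 0
```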
Related Topics
Parent topic: Configuring Oracle Exadata System Software