This chapter describes the major steps to configure a small Oracle Exadata System Software grid.
The steps are the same for a larger grid. You determine the number of disks and cells needed in the grid based on your requirements for capacity, performance, and redundancy.
Hardware and software have already been installed for the cells. The procedures in this chapter describe how to configure a storage cell for use with the Oracle Database and Oracle Automatic Storage Management (Oracle ASM) instances.
Note: Modifications to the Oracle Exadata Storage Server hardware or software are not supported. Only the documented network interfaces on the Oracle Exadata Storage Server should be used for all connectivity, including management and storage traffic. Additional network interfaces should not be used.
This chapter contains the following topics:
- Understanding Oracle Exadata System Software Release Numbering
- Understanding Oracle Exadata Storage Server Configuration
- Network Configuration and IP Addresses Recommendations
- Assigning IP Addresses for Oracle Exadata Database Machine
- Configuring Oracle Exadata System Software for Your Location
- Configuring Cells, Cell Disks, and Grid Disks with CellCLI
- Creating Flash Cache and Flash Grid Disks
- Setting Up Configuration Files for a Database Server Host
- Understanding Automated Cell Maintenance
2.1 Understanding Oracle Exadata System Software Release Numbering
The Oracle Exadata System Software release number is related to the Oracle Database release number.
The Oracle Exadata System Software release number matches the highest Oracle Grid Infrastructure and Oracle Database version it supports. For example, the highest version Oracle Exadata System Software release 18 supports is Oracle Grid Infrastructure and Oracle Database release 18. The highest version Oracle Exadata System Software release 12.2 supports is Oracle Grid Infrastructure and Oracle Database release 12.2.
Release 18c and Later Numbering
The Oracle Exadata System Software release that followed release 12.2.1.1.8 was renamed to 18.1.0, and a new numbering scheme for the Oracle Exadata System Software was implemented. Instead of a legacy nomenclature such as 12.1.2.3.5, a three-field format consisting of Year.Update.Revision is used, for example, 18.1.0. This new numbering scheme allows you to clearly determine:
- The annual release designation of the software
- The latest software update, which can contain new features
- The latest software revision, which includes security and software fixes
If there are new features or new hardware supported, a new software update is released during the year, for example, 19.2. To allow you to keep current on just security-related and other software fixes after your environment becomes stable, software revisions are made available approximately once a month, for example, 19.1.3.
Numbering for Releases Prior to 18c
- The first two digits of the Oracle Exadata System Software release number represent the major Oracle Database release number, such as Oracle Database 12c Release 1 (12.1). Oracle Exadata System Software release 12.1 is compatible with all Oracle Database 12c Release 1 (12.1) releases.
- The third digit usually represents the component-specific Oracle Database release number. This digit usually matches the fourth digit of the complete release number, such as 12.1.0.2.0 for the current release of Oracle Database.
- The last two digits represent the Oracle Exadata System Software release.
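The Year.Update.Revision scheme described above can be parsed mechanically. The following sketch is illustrative only: the `ExadataRelease` type and `parse_release` function are this guide's own helpers, not an Oracle API.

```python
# Parse an Oracle Exadata System Software release string that uses the
# Year.Update.Revision scheme introduced with release 18.1.0.
# The field names below are illustrative, not an Oracle API.
from typing import NamedTuple

class ExadataRelease(NamedTuple):
    year: int      # annual release designation, e.g. 19
    update: int    # software update, which can contain new features
    revision: int  # security and software fixes

def parse_release(release: str) -> ExadataRelease:
    year, update, revision = (int(part) for part in release.split("."))
    return ExadataRelease(year, update, revision)

r = parse_release("19.1.3")
print(r.year, r.update, r.revision)  # 19 1 3
```

A release such as 19.1.3 therefore reads as: 2019 annual release, first update, third monthly revision.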
2.2 Understanding Oracle Exadata Storage Server Configuration
Oracle Exadata Storage Server ships with all hardware and software pre-installed; however, you must configure Oracle Exadata System Software for your environment.
This topic provides a general overview of the configuration tasks. Subsequent topics describe the actual procedures.
2.2.1 Assign IP Addresses for the Storage Servers
As part of configuring the storage servers, you assign IP addresses to connect the storage to the various networks.
Assign IP addresses for each storage server for the following ports:
- Network access port
- Remote management port
- RDMA Network Fabric port
2.2.2 Configure the Storage Server for Your Location
Configure the storage server for use within your company.
- Power on the storage server.
- Change the default passwords.
- Set the time zone on the storage server to match the local time.
- Configure other information as needed, such as NTP and DNS servers.
2.2.3 Configure the Storage Cell
Use the ALTER CELL command to configure the cell.
In Example 2-1, e-mail notification is configured to send e-mail messages to the administrator of the storage cell. The hyphen (-) at the end of each line of the ALTER CELL command allows the command to continue on additional lines before you press Enter. As an alternative, you can run the command using a text file.
Example 2-1 Configuring a New Cell
CellCLI> ALTER CELL -
         smtpServer='my_mail.example.com', -
         smtpFromAddr='firstname.lastname@example.org', -
         smtpPwd=email_address_password, -
         smtpToAddr='email@example.com', -
         notificationPolicy='clear', -
         notificationMethod='mail,snmp'
2.2.4 Verify Storage Cell Attributes
Use the LIST CELL DETAIL command to verify the storage cell attributes.
Example 2-2 Viewing Storage Cell Details
This example shows how to view the storage cell attributes.
CellCLI> LIST CELL DETAIL
         name:                   cell01
         accessLevelPerm:        remoteLoginEnabled
         bbuStatus:              normal
         cellVersion:            OSS_22.214.171.124.0_LINUX.X64_190910
         cpuCount:               64/64
         diagHistoryDays:        7
         doNotServiceLEDStatus:  off
         fanCount:               12/12
         fanStatus:              normal
         flashCacheMode:         WriteBack
         httpsAccess:            ALL
         id:                     1904XCA016
         interconnectCount:      2
         interconnect1:          ib0
         interconnect2:          ib1
         iormBoost:              0.0
         ipaddress1:             192.168.41.245/21
         ipaddress2:             192.168.41.246/21
         kernelVersion:          4.14.35-1902.5.0.el7uek.x86_64
         locatorLEDStatus:       off
         makeModel:              Oracle Corporation ORACLE SERVER X8-2L High Capacity
         memoryGB:               188
         metricHistoryDays:      7
         offloadGroupEvents:
         pmemCacheMode:          WriteThrough
         powerCount:             2/2
         powerStatus:            normal
         ramCacheMaxSize:        0
         ramCacheMode:           On
         ramCacheSize:           0
         releaseImageStatus:     success
         releaseVersion:         126.96.36.199.0.190824
         rpmVersion:             cell-188.8.131.52.0_LINUX.X64_190824-1.x86_64
         releaseTrackingBug:     29344484
         smtpFrom:               "Exadata Admins"
         smtpFromAddr:           firstname.lastname@example.org
         smtpServer:             my_mail_svr.example.com
         smtpToAddr:             email@example.com
         snmpSubscriber:         host=host1,port=162,community=public,type=asr,asrmPort=16161
         status:                 online
         temperatureReading:     29.0
         temperatureStatus:      normal
         upTime:                 2 days, 7:05
         usbStatus:              normal
         cellsrvStatus:          stopped
         msStatus:               running
         rsStatus:               running
2.2.5 Create the Storage Cell Disks
Use the CREATE CELLDISK command to create the cell disks.
In Example 2-3, the ALL option creates all the cell disks using the default names.
The cell disks are created with names in the form CD_lunID_cellname. The lunID and cellname values correspond to the id attribute of the LUN and the name attribute of the cell. You can specify other disk names if you create single cell disks.
On Oracle Exadata Storage Servers with flash disks, the CREATE CELLDISK ALL command also creates cell disks on the flash disks.
CellCLI> CREATE CELLDISK ALL
CellDisk FD_01_cell01 successfully created
CellDisk FD_02_cell01 successfully created
CellDisk FD_03_cell01 successfully created
CellDisk FD_04_cell01 successfully created
CellDisk FD_05_cell01 successfully created
CellDisk FD_06_cell01 successfully created
CellDisk FD_07_cell01 successfully created
CellDisk FD_08_cell01 successfully created
CellDisk FD_09_cell01 successfully created
CellDisk FD_10_cell01 successfully created
CellDisk FD_11_cell01 successfully created
CellDisk FD_12_cell01 successfully created
CellDisk FD_13_cell01 successfully created
CellDisk FD_14_cell01 successfully created
CellDisk FD_15_cell01 successfully created
The CREATE CELLDISK command creates cell disks on flash disks only if they do not currently exist. If there are already cell disks on the flash disks, then they are not created again.
Example 2-3 Creating Cell Disks
CellCLI> CREATE CELLDISK ALL
CellDisk CD_00_cell01 successfully created
CellDisk CD_01_cell01 successfully created
CellDisk CD_02_cell01 successfully created
...
CellDisk CD_10_cell01 successfully created
CellDisk CD_11_cell01 successfully created
2.2.6 Create the Grid Disks
Use the CREATE GRIDDISK command to create the grid disks. The size of the disks depends on your requirements.
- Determine the naming format for the grid disks, or use the default generated names.
Grid disk names must be unique across all cells within a single deployment. By following the recommended naming conventions for the grid and cell disks, you automatically get unique names. If you do not use the default generated name when creating grid disks, then you must ensure that the grid disk name is unique across all storage cells. If the disk name is not unique, then it might not be possible to add the grid disk to an Oracle Automatic Storage Management (Oracle ASM) disk group.
When the ALL PREFIX option is used, the generated grid disk names are composed of the grid disk prefix followed by an underscore (_) and then the cell disk name.
- Use the CREATE GRIDDISK command to create the grid disks.
When creating a grid disk:
You do not have to specify the size attribute. The maximum possible size is automatically chosen if the size attribute is omitted.
Offset determines the position on the disk where the grid disk is allocated. The outermost tracks have lower offset values, and these tracks have greater speed and higher bandwidth. Offset can be explicitly specified to create grid disks that are relatively higher performing than other grid disks. If offset is not specified, then the best (warmest) available offset is chosen automatically in chronological order of grid disk creation. You should first create those grid disks expected to contain the most frequently accessed (hottest) data, and then create the grid disks that contain the relatively colder data.
Sparse grid disks only need to be created when using snapshots. The sparse disk stores the files generated by the snapshot. All standard grid disk operations are supported for sparse grid disks. Sparse grid disks have an additional attribute, virtualsize, which configures the maximum virtual space the grid disk provides. The attribute can be modified if the configuration runs out of virtual space on the sparse grid disk and there is physical space available.
The maximum allowed size of a sparse disk is the size of free space on the cell disk. The maximum allowed virtual size is 100 TB.
Oracle Exadata System Software monitors the physical space used by sparse grid disks, and generates an alert when most of the space is used. To avoid out-of-space errors, add more physical space to the grid disk using the ALTER GRIDDISK command, or delete some of the Oracle ASM files to free space on the grid disk.
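The sparse grid disk limits above can be checked arithmetically before you run CREATE GRIDDISK. This is a minimal sketch, not CellCLI validation logic; the `sparse_disk_ok` helper and the sample free-space figure are assumptions for illustration.

```python
# Sanity-check sparse grid disk sizing against the documented limits:
# the physical size must fit in the cell disk's free space, and the
# virtual size must not exceed 100 TB. Helper and values are illustrative.
TB = 1024**4
GB = 1024**3

def sparse_disk_ok(size_bytes: int, virtualsize_bytes: int,
                   celldisk_free_bytes: int) -> bool:
    return (size_bytes <= celldisk_free_bytes
            and virtualsize_bytes <= 100 * TB)

# 300 GB physical, 20000 GB virtual, 600 GB free: within both limits.
print(sparse_disk_ok(300 * GB, 20000 * GB, 600 * GB))  # True
# 200 TB virtual exceeds the 100 TB maximum.
print(sparse_disk_ok(300 * GB, 200 * TB, 600 * GB))    # False
```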
Example 2-4 Creating Grid Disks
This example shows how to create grid disks. In this example, the ALL HARDDISK PREFIX option creates one grid disk on each cell disk of the storage cell. The Oracle ASM disk group name is used with PREFIX to identify which grid disk belongs to the disk group. The prefix values data and reco are the names of the Oracle ASM disk groups that are created.
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=data, size=300G
GridDisk data_CD_00_cell01 successfully created
GridDisk data_CD_01_cell01 successfully created
GridDisk data_CD_02_cell01 successfully created
...
GridDisk data_CD_11_cell01 successfully created

CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=reco, size=600G
GridDisk reco_CD_00_cell01 successfully created
GridDisk reco_CD_01_cell01 successfully created
GridDisk reco_CD_02_cell01 successfully created
...
GridDisk reco_CD_11_cell01 successfully created
The LIST GRIDDISK command shows the grid disks that are created.
CellCLI> LIST GRIDDISK
data_CD_00_cell01 active
data_CD_01_cell01 active
data_CD_02_cell01 active
...
data_CD_11_cell01 active
reco_CD_00_cell01 active
reco_CD_01_cell01 active
reco_CD_02_cell01 active
...
reco_CD_11_cell01 active
Example 2-5 Creating a Sparse Grid Disk
In this example, the sparse grid disk uses up to 300 GB from the physical cell disk size, but it exposes 20000 GB virtual space for the Oracle ASM files.
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=sp, size=300G, virtualsize=20000G
GridDisk sp_CD_00_cell01 successfully created
GridDisk sp_CD_01_cell01 successfully created
GridDisk sp_CD_02_cell01 successfully created
...
GridDisk sp_CD_11_cell01 successfully created
2.2.7 Create the PMEM Cache
By default, the CREATE CELL command creates the cell disks, which are used when creating the PMEM Cache.
- Use the CREATE PMEMCACHE ALL command to create the PMEM Cache on all PMEM cell disks.
You can use the ALTER PMEMCACHE command to alter the set of cell disks used by the PMEM cache, flush dirty blocks from the PMEM cache, or cancel a previous flush operation on the specified cell disks to re-enable caching.
2.2.8 Create the Flash Disks and Flash Cache
By default, the CREATE CELL command creates flash cell disks on all flash disks, and then creates Exadata Smart Flash Cache on the flash cell disks.
- Use the CREATE GRIDDISK ALL FLASHDISK PREFIX='FLASH' and CREATE FLASHCACHE commands to create the flash disks and flash cache.
To change the size of the Exadata Smart Flash Cache or to create flash grid disks, you must first remove the flash cache, and then either create the flash cache with a different size or create the flash grid disks.
2.2.9 Configure Oracle Auto Service Request (ASR)
Oracle Auto Service Request (ASR) for Oracle Exadata Database Machine automatically creates service requests by detecting common hardware faults.
ASR support covers selected components, such as disks and flash cards, in Oracle Exadata Storage Servers and Oracle Exadata Database Servers.
- If you did not elect to configure Oracle Auto Service Request (ASR) when using Oracle Exadata Deployment Assistant (OEDA) to configure your Oracle Exadata Rack, then refer to Oracle Auto Service Request Quick Installation Guide for Oracle Exadata Database Machine for configuration instructions.
2.3 Network Configuration and IP Addresses Recommendations
Follow the recommended network configuration for Oracle Exadata Storage Server.
If your network is not already configured, then set up a fault-tolerant, private network subnet for Oracle Exadata Database Machine storage servers and database servers with multiple switches to eliminate the switch as a single point of failure. If all the interconnections in the storage network are connected through a single switch, then that switch can be a single point of failure.
If you are using a managed switch, then ensure that the switch VLAN configuration isolates storage server network traffic from all other network traffic.
Allocate a block of IP addresses for the storage server general administration and the Integrated Lights Out Manager (ILOM) interfaces. Typically, these interfaces are on the same subnet, and may share the subnet with other hosts. For example, on the 192.168.200.0/24 subnet, you could assign the block of IP addresses between 192.168.200.31 and 192.168.200.100 for storage server general administration and ILOM interfaces. Other hosts sharing the subnet would be allocated IP addresses outside the block. The general administration and ILOM interfaces can be placed on separate subnets, but this is not required.
The following is a sample of four non-overlapping blocks of addresses. One block should be assigned to the normal Gigabit Ethernet interfaces of the storage servers. Another block can be assigned to the ILOM ports of the storage servers. The third block can be used for the database server Gigabit Ethernet ports, and the fourth for the database server ILOM ports.
192.168.200.0/21 (netmask 255.255.248.0)
192.168.208.0/21 (netmask 255.255.248.0)
192.168.216.0/21 (netmask 255.255.248.0)
192.168.224.0/21 (netmask 255.255.248.0)
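You can confirm that candidate blocks like the four above are non-overlapping and share a netmask using only the Python standard library. This is a verification sketch for the sample blocks, not part of any Exadata tooling.

```python
# Verify that the four sample administration blocks above do not
# overlap and share a common netmask, using the stdlib ipaddress module.
import ipaddress
from itertools import combinations

blocks = [
    ipaddress.ip_network("192.168.200.0/21"),
    ipaddress.ip_network("192.168.208.0/21"),
    ipaddress.ip_network("192.168.216.0/21"),
    ipaddress.ip_network("192.168.224.0/21"),
]

# Every pair of blocks must be disjoint.
assert all(not a.overlaps(b) for a, b in combinations(blocks, 2))
# An identical netmask across blocks simplifies administration.
assert len({b.netmask for b in blocks}) == 1
print("no overlapping blocks; common netmask:", blocks[0].netmask)
```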
The RDMA Network Fabric network should be a private network for use by the database server hosts and storage servers, and can have private local network addresses. These addresses must also be allocated in non-overlapping blocks.
The following example has two blocks of local RDMA Network Fabric addresses. Both the database server RDMA Network Fabric addresses and the storage server RDMA Network Fabric addresses must be in the same subnet to communicate with each other. With bonding, only one subnet is necessary for RDMA Network Fabric addresses.
192.168.50.0/24 (netmask 255.255.255.0)
192.168.51.0/24 (netmask 255.255.255.0)
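The same-subnet requirement above is easy to check programmatically. In this sketch the two host addresses are hypothetical examples, not addresses from any Exadata deployment.

```python
# Check that a database server and a storage server RDMA Network Fabric
# address fall in the same subnet, as required for them to communicate.
import ipaddress

net = ipaddress.ip_network("192.168.50.0/24")
db_server = ipaddress.ip_address("192.168.50.10")   # hypothetical address
storage = ipaddress.ip_address("192.168.50.101")    # hypothetical address

print(db_server in net and storage in net)  # True
```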
The preceding subnet blocks do not conflict with each other, and do not conflict with the current allocation to any of the hosts. It is a good practice to allocate the subnet blocks so that they have an identical netmask, which helps to simplify network administration.
For Oracle Exadata Database Machine, the maximum allowed number of hosts in an RDMA Network Fabric network is 4096. Therefore, the network prefix value for the RDMA Network Fabric network must be equal to or greater than 20. This means the netmask must be between 255.255.240.0 and 255.255.255.254 inclusive.
You can determine the network prefix value for a given host IP address and its netmask using the ipcalc utility on any Oracle Linux machine, as follows:
ipcalc <host ip address> -m <netmask for the host> -p
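If ipcalc is not at hand, the same prefix computation can be done with the Python standard library. This is a rough equivalent of `ipcalc <ip> -m <netmask> -p`, plus the /20-or-longer rule stated above; the function name is this guide's own.

```python
# Compute the network prefix length for a host IP and netmask, similar
# to `ipcalc <ip> -m <netmask> -p`, using the stdlib ipaddress module.
import ipaddress

def prefix_length(host_ip: str, netmask: str) -> int:
    iface = ipaddress.ip_interface(f"{host_ip}/{netmask}")
    return iface.network.prefixlen

p = prefix_length("192.168.50.27", "255.255.255.0")
print("PREFIX =", p)        # PREFIX = 24
# The RDMA Network Fabric rule above: the prefix must be 20 or greater.
print("allowed:", p >= 20)  # allowed: True
```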
Do not allocate addresses that end in .255, or those that would be used as broadcast addresses for the specific netmask that you have selected. For example, avoid addresses such as 192.168.200.0, 192.168.200.1, and 192.168.200.255.
Ensure the network allows for future expansion. For example, 255.255.255.254 (prefix /31) is a valid netmask, but it allows only one host.
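The addresses to avoid, and the expansion concern, both fall out of a block's network and broadcast addresses. A small stdlib sketch, using the sample subnet from this section:

```python
# Show the addresses to avoid (network and broadcast) and how few
# addresses a very long prefix leaves for expansion.
import ipaddress

net = ipaddress.ip_network("192.168.200.0/24")
print(net.network_address)    # 192.168.200.0   -- do not allocate
print(net.broadcast_address)  # 192.168.200.255 -- do not allocate
print(net.num_addresses - 2)  # 254 usable host addresses

tiny = ipaddress.ip_network("192.168.200.0/31")
print(tiny.num_addresses)     # 2 -- leaves no room for expansion
```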
If a domain name system (DNS) is required, then set up your DNS to help reference storage servers and interconnections. Oracle Exadata Database Machine storage servers do not require DNS. However, if DNS is required, then set up your DNS with the appropriate IP address and host name of the storage servers.
The RDMA Network Fabric network should be used for network and storage communication when using Oracle Clusterware. Use the following command to verify the private network for Oracle Clusterware communication is using the RDMA Network Fabric:
oifcfg getif -type cluster_interconnect
The Reliable Data Socket (RDS) protocol should be used over the RDMA Network Fabric network for database server to storage server communication and Oracle Real Application Clusters (Oracle RAC) communication. Check the alert log to verify the private network for Oracle RAC is running the RDS protocol over the RDMA Network Fabric network. The following message should be in the alert logs for the instances:
cluster interconnect IPC version: Oracle RDS/IP (generic)
If the RDS protocol is not being used over the RDMA Network Fabric network, then perform the following procedure:
Shut down any processes that are using the Oracle software.
Change to the directory that contains the ins_rdbms.mk file.
Relink the Oracle Database software with the RDS protocol.
make -f ins_rdbms.mk ipc_rds ioracle
Note: If Oracle ASM uses a separate Oracle home from the database instance, then RDS should be enabled for the binaries in both homes.
2.4 Assigning IP Addresses for Oracle Exadata Database Machine
This topic summarizes the Oracle Exadata Database Machine network preparation before installing the new storage server.
Each storage server contains the following network ports:
One dual-port RDMA Network Fabric card
Oracle Exadata Storage Servers are designed to be connected to two separate RDMA Network Fabric switches for high availability. The dual port card is only for availability. Each port of the RDMA Network Fabric card is capable of transferring the full data bandwidth generated by the storage server. The loss of one network connection does not impact the performance of the storage server.
Ethernet ports for normal network access, depending on the platform
Oracle Exadata Database Machine X6-2 and earlier storage servers come with four Ethernet ports. However, only connect one port to a switch, and configure it for network access.
One Ethernet port is exposed by the Baseboard Management Controller (BMC), or Management Controller (MC) on the storage server. This port is used for Integrated Lights Out Manager (ILOM).
To prepare the storage server for network access, you must perform the following steps:
Assign one address to the bonded RDMA Network Fabric port. When you first set up the storage server, you are prompted for the RDMA Network Fabric configuration information. This information is used automatically during the CREATE CELL command on first boot, and provides the data path for communication between the storage server and the database servers.
To change the IP address after the initial configuration, use the following command, where interface_name is the interface name for the RDMA Network Fabric:
CREATE CELL interconnect1=interface_name, interconnect2=interface_name
For InfiniBand Transport Layer systems based on an InfiniBand Network Layer network, the interface names are ib0 and ib1. For InfiniBand Transport Layer systems based on a RoCE Network Layer network, the interface names are re0 and re1.
Assign an IP address to the storage server for network access.
Assign an IP address to the storage server for ILOM.
You can access the remote management functionality with a Java-enabled Web browser at the assigned IP address.
See Also: Oracle Integrated Lights Out Manager (ILOM) Documentation
2.5 Configuring Oracle Exadata System Software for Your Location
This section describes the storage cell configuration, and contains the following topics:
2.5.1 Configuring ILOM With Static IP for Oracle Exadata Storage Servers
Basic lights-out remote management configuration is done during the first boot.
Refer to Preparing the Servers for Integrated Lights Out Manager (ILOM) configuration information.
Caution: Do not enable the sideband management available in ILOM. Doing so disables all the SNMP agent reporting and monitoring functionality for the server.
2.5.2 Preparing the Servers
Use the following steps to prepare the database servers and storage servers for use.
- Configure Integrated Lights Out Manager (ILOM).
- Power on the storage server to boot its operating system.
- Respond to the prompts to configure the system, after the storage server boots.
Press y to confirm, or n to retry or terminate when you are prompted for a yes or no response during the configuration steps. The yes or no prompt shows the default choice in brackets. If you do not enter a response, then the default choice is selected when you press Enter.
- Check the network connections.
The list of all discovered interfaces displays, and you are prompted to check the cables for those interfaces that do not have an active network cable connection. You can retry the configuration steps after connecting the cables, or ignore the unconnected interfaces. Only connected interfaces can be configured.
- Enter the DNS server IP addresses, if needed. A DNS server is not needed for a standalone, private storage environment.
- Enter the time preference.
- Choose the local time region number from the displayed list of available time regions.
- Choose the location within the time region number from the displayed list of locations.
- Enter the Network Time Protocol (NTP) servers.
These servers are required to maintain the time on the system correctly, and are synchronized to your local time source.
- Enter the Ethernet addresses, RDMA Network Fabric IP addresses and interfaces.
A list of all Ethernet and RDMA Network Fabric interfaces that have an active network connection is displayed, with the name of the interface on the far left. On InfiniBand Transport Layer systems based on an InfiniBand Network Layer, the interface is named BONDIB0 and uses bonding between the physical interfaces ib0 and ib1. Bonding provides the ability to transparently fail over from one physical interface to the other if connectivity is lost.
For each Ethernet and RDMA Network Fabric interface you configure, you are prompted for the following that apply to the interface:
- IP address
- Gateway IP address
- Fully-qualified domain name
If you choose not to configure each interface in the list, then that interface is not configured, and it does not start at system startup. After the configuration of the IP addresses, the system completes the startup process. At the end of the process, additional packages are installed.
- Select the canonical, fully-qualified domain name from the list.
This host name is the primary public host name for the server.
If more than one Ethernet interface was configured with a gateway, then select the line number for the default gateway. This gateway is recorded in the /etc/sysconfig/network file, and is used as the default gateway.
- Provide the following information when prompted for it:
- ILOM full, domain-qualified host name
- ILOM IP address
- ILOM netmask
- ILOM gateway
- ILOM NTP servers
- ILOM DNS server
- (Oracle Exadata Storage Server only) Change the initial passwords for the root, celladmin, and cellmonitor users to more secure passwords.
Note: If you do not have the password for the root user, then contact Oracle Support Services.
To change the passwords, log in as the root user, then use the passwd command to change the passwords, such as the following:
# passwd
# passwd celladmin
# passwd cellmonitor
To verify the changed passwords, log in and out of the server using each of the user names.
The cellmonitor user is set up with privileges that enable you to only view Oracle Exadata Storage Server objects. You must be logged in as the celladmin user to perform administrative tasks.
- Check for any failures reported in the /var/log/cellos/vldrun.first_boot.log file after the first boot configuration.
For each failed validation, perform the following procedure:
- Look for the SuggestedRemedy file.
The file exists only if the validation process has identified some corrective action. Follow the suggestions in the file to correct the cause of the failure.
- If the SuggestedRemedy file does not exist, then examine the log file for the failed validation in /var/log/cellos/validations to track down the cause, and correct it as needed.
- (Oracle Exadata Storage Server only) Use the following commands to verify acceptable performance levels:
# cellcli -e "alter cell shutdown services cellsrv"
# cellcli -e "calibrate"
2.6 Configuring Cells, Cell Disks, and Grid Disks with CellCLI
After you complete the tasks described in "Preparing the Servers", you must configure the cells, cell disks and grid disks for each new storage server.
During the procedure, you can display help using the HELP command, and display object attributes using the DESCRIBE command. Example 2-6 shows how to display help and a list of attributes for the cell object.
Use the following procedure to create the cells, cell disks, and grid disks for Oracle Exadata Storage Servers:
- Log in as the celladmin user.
- Use the cellcli command to start the Cell Control Command-Line Interface (CellCLI) and connect to the storage cell.
The required cell services, Restart Server (RS) and Management Server (MS), should be running after the binary has been installed. If not, then an error message displays when using the CellCLI utility. If an error message displays, then run the following commands to start Oracle Exadata System Software RS and MS services:
CellCLI> ALTER CELL STARTUP SERVICES RS
CellCLI> ALTER CELL STARTUP SERVICES MS
- Configure the cell using the CellCLI ALTER CELL command. During first boot, the cell is created, and the flash cell disks and flash cache are defined automatically.
CellCLI> ALTER CELL name=cell_name, -
         smtpServer='my_mail.example.com', -
         smtpFromAddr='firstname.lastname@example.org', -
         smtpPwd=email_address_password, -
         smtpToAddr='email@example.com', -
         notificationPolicy='clear', -
         notificationMethod='mail,snmp'
- Use the following command to check the storage cell attributes, and to verify the current configuration:
CellCLI> LIST CELL DETAIL
- Create the cell disks using the CREATE CELLDISK command. In most cases, you can use the default cell disk names and LUN IDs. Use the following command to create cell disks and LUN IDs with the default values.
CellCLI> CREATE CELLDISK ALL
- Create grid disks on each cell disk of the storage cell, using the CREATE GRIDDISK command.
- Exit the CellCLI utility after setting up the storage cell, using the following command:
CellCLI> EXIT
- Repeat the configuration process for each new storage cell. This procedure must be done on each new cell before configuring the Exadata Cell realm, the database server hosts, or the database and Oracle ASM instances.
Example 2-6 Displaying Help Information
CellCLI> HELP
CellCLI> HELP CREATE CELL
CellCLI> HELP ALTER CELL
CellCLI> DESCRIBE CELL
After you complete the cell configuration, you can perform the following optional steps on the storage cell:
Add the storage cell to the Exadata Cell realm.
Configure security on the Oracle Exadata Storage Server grid disks, as described in https://www.oracle.com/pls/topic/lookup?ctx=en/engineered-systems/exadata-database-machine/sagug&id=DBMSQ-GUID-5025E63E-2D49-4DD8-8FD4-421DE4FC63AB.
Configure an inter-database plan for a cell rather than using the default plans, as described in Managing I/O Resources.
For database server hosts other than those in Oracle Exadata Database Machine, refer to release notes for enabling them to work with Oracle Exadata Storage Servers.
2.7 Creating Flash Cache and Flash Grid Disks
Oracle Exadata Storage Servers are equipped with flash disks. These flash disks can be used to create flash grid disks to store frequently accessed data.
Alternatively, all or part of the flash disk space can be dedicated to Exadata Smart Flash Cache. In this case, the most frequently-accessed data is cached in Exadata Smart Flash Cache.
The ALTER CELLDISK ... FLUSH command must be run before exporting a cell disk to ensure that the data not yet synchronized with the disk (dirty data) is flushed from the flash cache to the grid disks.
- By default, the CREATE CELL command creates flash cell disks on all flash disks, and then creates Exadata Smart Flash Cache on the flash cell disks.
To change the size of the Exadata Smart Flash Cache or to create flash grid disks, you must first remove the flash cache, and then either create the flash cache with a different size or create the flash grid disks.
- To change the amount of flash cache allocated, use the flashcache attribute with the CREATE CELL command. If the flashcache attribute is not specified, then all available flash space is allocated for the flash cache.
- To explicitly create the Exadata Smart Flash Cache, use the CREATE FLASHCACHE command. Use the celldisk attribute to specify which flash cell disks contain the Exadata Smart Flash Cache. Alternatively, you can specify ALL to use all flash cell disks. Use the size attribute to specify the total size of the flash cache to allocate. The allocation is evenly distributed across all flash cell disks.
Example 2-7 Using the CREATE FLASHCACHE Command
This example shows how to create the Exadata Smart Flash Cache. The entire size of the flash cell disk is not used because the size attribute has been set.
CellCLI> CREATE FLASHCACHE ALL size=100g
Flash cache cell01_FLASHCACHE successfully created
Example 2-8 Using the CREATE GRIDDISK Command to Create Flash Grid Disks
This example shows how to use the remaining space on the flash cell disks to create flash grid disks.
CellCLI> CREATE GRIDDISK ALL FLASHDISK PREFIX='FLASH'
GridDisk FLASH_FD_00_cell01 successfully created
GridDisk FLASH_FD_01_cell01 successfully created
GridDisk FLASH_FD_02_cell01 successfully created
GridDisk FLASH_FD_03_cell01 successfully created
GridDisk FLASH_FD_04_cell01 successfully created
GridDisk FLASH_FD_05_cell01 successfully created
GridDisk FLASH_FD_06_cell01 successfully created
GridDisk FLASH_FD_07_cell01 successfully created
GridDisk FLASH_FD_08_cell01 successfully created
GridDisk FLASH_FD_09_cell01 successfully created
GridDisk FLASH_FD_10_cell01 successfully created
GridDisk FLASH_FD_11_cell01 successfully created
GridDisk FLASH_FD_12_cell01 successfully created
GridDisk FLASH_FD_13_cell01 successfully created
GridDisk FLASH_FD_14_cell01 successfully created
GridDisk FLASH_FD_15_cell01 successfully created

CellCLI> LIST GRIDDISK
FLASH_FD_00_cell01 active
FLASH_FD_01_cell01 active
FLASH_FD_02_cell01 active
FLASH_FD_03_cell01 active
FLASH_FD_04_cell01 active
FLASH_FD_05_cell01 active
FLASH_FD_06_cell01 active
FLASH_FD_07_cell01 active
FLASH_FD_08_cell01 active
FLASH_FD_09_cell01 active
FLASH_FD_10_cell01 active
FLASH_FD_11_cell01 active
FLASH_FD_12_cell01 active
FLASH_FD_13_cell01 active
FLASH_FD_14_cell01 active
FLASH_FD_15_cell01 active
Example 2-9 Displaying the Exadata Smart Flash Cache Configuration for a Cell
Use the LIST FLASHCACHE command to display the Exadata Smart Flash Cache configuration for the cell, as shown in this example.
CellCLI> LIST FLASHCACHE DETAIL
         name:          cell01_FLASHCACHE
         cellDisk:      FD_00_cell01, FD_01_cell01, FD_02_cell01, FD_03_cell01, FD_04_cell01, FD_05_cell01, FD_06_cell01, FD_07_cell01, FD_08_cell01, FD_09_cell01, FD_10_cell01, FD_11_cell01, FD_12_cell01, FD_13_cell01, FD_14_cell01, FD_15_cell01
         creationTime:  2009-10-19T17:18:35-07:00
         id:            b79b3376-7b89-4de8-8051-6eefc442c2fa
         size:          365.25G
         status:        normal
Example 2-10 Dropping Exadata Smart Flash Cache from a Cell
To remove Exadata Smart Flash Cache from a cell, use the DROP FLASHCACHE command.
CellCLI> DROP FLASHCACHE
Flash cache cell01_FLASHCACHE successfully dropped
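If the flash grid disks created in Example 2-8 are still in place, they must be dropped before the flash cell disks can be reused for a new flash cache. As a hedged sketch using the DROP GRIDDISK syntax (the PREFIX value assumes the disks were created with PREFIX='FLASH' as in Example 2-8):

```
CellCLI> DROP GRIDDISK ALL FLASHDISK PREFIX='FLASH'
```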
2.8 Setting Up Configuration Files for a Database Server Host
After Oracle Exadata Storage Server is configured, the database server host must be configured with the cellinit.ora and cellip.ora files to use the cell.

The cellinit.ora file contains the database IP addresses.

The cellip.ora file contains the storage cell IP addresses.

Both files are located on the database server host. These configuration files contain IP addresses, not host names.

The cellinit.ora file is host-specific, and contains all database IP addresses that connect to the storage network used by Oracle Exadata Storage Servers. This file must exist for each database server that connects to Oracle Exadata Storage Servers. The IP addresses are specified in Classless Inter-Domain Routing (CIDR) format. The first IP address must be designated as ipaddress1, the second IP address as ipaddress2, and so on.
The following list is an example of the IP address entry for a single database server in Oracle Exadata Database Machine:

- Oracle Exadata Database Server in Oracle Exadata Database Machine X4-2
- Oracle Exadata Database Server in Oracle Exadata Database Machine X3-2 or Oracle Exadata Database Machine X2-2
- Oracle Exadata Database Server in Oracle Exadata Database Machine X3-8 or Oracle Exadata Database Machine X2-8
The IP addresses should not be changed after this file is created.
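As an illustrative sketch only (the addresses, subnet mask, and number of entries below are hypothetical values, not taken from this guide), a cellinit.ora entry in CIDR format and the corresponding cellip.ora entries might look like this:

```
# cellinit.ora -- database server storage network IP address, CIDR format (hypothetical)
ipaddress1=192.168.10.1/24

# cellip.ora -- storage cell IP addresses, one line per cell (hypothetical)
cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"
```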
At boot time on an 8-socket system, each database server generates a cellaffinity.ora configuration file. The cellaffinity.ora file resides on the database server, and must be readable by Oracle Database.
The file contains a mapping between the NUMA node numbers and the IP address of the network interface card closest to each server. Oracle Database uses the file to select the closest network interface card when communicating with Oracle Exadata Storage Servers, thereby optimizing performance.
This file is only generated and used on an 8-socket system. On a 2-socket system, there is no performance to be gained in this manner, and no cellaffinity.ora file is generated. The file is not intended to be edited directly with a text editor.
To configure a database server host for use with a cell, refer to Oracle Exadata Database Machine Maintenance Guide.
2.9 Understanding Automated Cell Maintenance
The Management Server (MS) includes a file deletion policy based on the date.
When there is a shortage of space in the Automatic Diagnostic Repository (ADR) directory, MS deletes the following files:

- All files in the ADR base directory older than 7 days.
- All files in the LOG_HOME directory older than 7 days.
- All metric history files older than 7 days.
The retention period of seven days is the default. The retention period can be modified using the diagHistoryDays and metricHistoryDays attributes with the ALTER CELL command. The diagHistoryDays attribute controls the ADR files, and the metricHistoryDays attribute controls the other files.
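For example, both retention attributes can be set with a single ALTER CELL command. This is a sketch only; the 10-day value is illustrative, not a recommendation from this guide:

```
CellCLI> ALTER CELL diagHistoryDays=10, metricHistoryDays=10
```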
If there is sufficient disk space, then trace files are not purged. This can result in files persisting in the ADR base directory past the time limit specified by the diagHistoryDays attribute.
In addition, the alert.log file is renamed if it is larger than 10 MB, and versions of the file that are older than 7 days are deleted if their total size is greater than 50 MB.
MS includes a file deletion policy that is triggered when file system utilization is high. Deletion of files in the / (root) file system is triggered when file system utilization reaches 80 percent. Deletion of files in the other monitored file systems is triggered when utilization reaches 90 percent, and the alert is cleared when utilization is below 85 percent. An alert is sent before the deletion begins. The alert includes the name of the directory, and space usage for the subdirectories. In particular, the deletion policy is as follows:
In the affected file systems, files in the ADR base directory, the metric history directory, and the LOG_HOME directory are deleted using a policy based on the file modification time stamp.
- Files older than the number of days set by the metricHistoryDays attribute value are deleted first.
- Successive deletions occur for earlier files, down to files with modification time stamps older than or equal to 10 minutes, or until file system utilization is less than 75 percent.
- The renamed ms-odl generation files that are over 5 MB and older than the successively-shorter age intervals are also deleted.
- Crash files that are over 5 MB and older than one day are deleted.
For the other file system, the deletion policy is similar to the preceding settings. However, the file threshold is 90 percent, and files are deleted until the file system utilization is less than 85 percent.
When file system utilization is full, the files controlled by the diagHistoryDays and metricHistoryDays attributes are purged using the same purging policy.
In addition, files in the home directories that are over 5 MB and older than one day are deleted.
Every hour, MS deletes eligible alerts from the alert history using the following criteria. Alerts are considered eligible if they are stateless, or if they are stateful alerts that have been resolved.
- If there are fewer than 500 alerts, then alerts older than 100 days are deleted.
- If there are between 500 and 999 alerts, then alerts older than 7 days are deleted.
- If there are 1,000 or more alerts, then all eligible alerts are deleted every minute.
Note: Any directories or files with SAVE in the name are not deleted.