2 Configuring Oracle Exadata System Software

This chapter describes the major steps to configure a small Oracle Exadata System Software grid.

The steps are the same for a larger grid. You determine the number of disks and cells needed in the grid based on your requirements for capacity, performance, and redundancy.

Hardware and software have already been installed for the cells. The procedures in this chapter describe how to configure a storage cell for use with the Oracle Database and Oracle Automatic Storage Management (Oracle ASM) instances.


Modifications to the Oracle Exadata Storage Server hardware or software are not supported. Use only the documented network interfaces on the Oracle Exadata Storage Server for all connectivity, including management and storage traffic. Do not use additional network interfaces.

This chapter contains the following topics:

2.1 Understanding Oracle Exadata System Software Release Numbering

The Oracle Exadata System Software release number is related to the Oracle Database release number.

  • The first two digits of the Oracle Exadata System Software release number represent the major Oracle Database release number, such as Oracle Database 12c Release 1 (12.1). Oracle Exadata System Software release 12.1 is compatible with all Oracle Database 12c Release 1 (12.1) releases.

  • The third digit usually represents the component-specific Oracle Database release number. This digit usually matches the fourth digit of the complete release number of the current release of Oracle Database.

  • The last two digits represent the Oracle Exadata System Software release.
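The numbering scheme described above can be sketched in code. The sample release string 12.1.2.1.0 is used only for illustration; the helper function below is not part of any Oracle tool.

```python
# Sketch: decomposing an Oracle Exadata System Software release number
# into the components described above. The release string "12.1.2.1.0"
# is an illustrative example, not a value prescribed by this guide.

def decompose_release(release: str) -> dict:
    """Split a dotted Exadata release number into its documented parts."""
    parts = release.split(".")
    return {
        # First two digits: the major Oracle Database release (e.g. 12.1)
        "database_release": ".".join(parts[0:2]),
        # Third digit: the component-specific Oracle Database release number
        "component_release": parts[2],
        # Last two digits: the Oracle Exadata System Software release
        "exadata_release": ".".join(parts[-2:]),
    }

info = decompose_release("12.1.2.1.0")
print(info["database_release"])  # -> 12.1
```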

2.2 Understanding Oracle Exadata Storage Server Configuration

Oracle Exadata Storage Server ships with all hardware and software pre-installed. However, you must configure Oracle Exadata System Software for your environment.

2.2.1 Assign IP Addresses for the Storage Cells

Assign IP addresses for the storage cell for the following ports:

  • Network access port

  • Remote management port

  • InfiniBand port

2.2.2 Configure the Storage Cell for Your Location

Power on the storage cell and configure it for your location, such as setting the time zone and passwords.

2.2.3 Configure the Storage Cell

Use the ALTER CELL command to configure the cell.

In Example 2-1, e-mail notification is configured to send messages to the administrator of the storage cell. The hyphen (-) at the end of each line continues the ALTER CELL command onto the next line; press Enter only after the final line. As an alternative, you can run the command using a text file.

Example 2-1 Configuring a New Cell

CellCLI> ALTER CELL                                                       -
         smtpServer='my_mail.example.com',                                -
         smtpFromAddr='john.doe@example.com',                             -
         smtpPwd=email_address_password,                                  -
         smtpToAddr='jane.smith@example.com',                             -
         notificationPolicy='clear'
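The text-file alternative works like SQL*Plus scripting: place the ALTER CELL command in a file and run it from CellCLI with the @ (or START) command. The file name altercell.cli below is an assumption for illustration; its contents would be the ALTER CELL command exactly as shown in Example 2-1.

```
CellCLI> @altercell.cli
```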

2.2.4 Verify Storage Cell Attributes

Use the LIST CELL DETAIL command to verify the storage cell attributes.

Example 2-2 Viewing Storage Cell Details

This example shows how to view the storage cell attributes.

CellCLI> LIST CELL DETAIL

         name:                   cell01
         accessLevelPerm:        remoteLoginEnabled
         bbuStatus:              normal
         cellVersion:            OSS_18.
         cpuCount:               24/24
         diagHistoryDays:        7
         fanCount:               12/12
         fanStatus:              normal
         flashCacheMode:         WriteBack
         httpsAccess:            ALL
         id:                     1031FMM062
         interconnectCount:      2
         interconnect1:          bondib0
         iormBoost:              0.0
         kernelVersion:          4.1.12-94.8.4.el6uek.x86_64
         locatorLEDStatus:       off
         makeModel:              Oracle Corporation SUN FIRE X4270 M2 SERVER High Performance
         memoryGB:               24
         metricHistoryDays:      7
         powerCount:             2/2
         powerStatus:            normal
         ramCacheMaxSize:        0
         ramCacheMode:           On
         ramCacheSize:           0
         releaseImageStatus:     success
         rpmVersion:             cell-
         releaseTrackingBug:     27347059
         smtpFrom:               "John Doe"
         smtpFromAddr:           john.doe@example.com
         smtpServer:             my_mail.example.com
         smtpToAddr:             jane.smith@example.com
         snmpSubscriber:         host=host1,port=162,community=public,type=asr,asrmPort=16161
         status:                 online
         temperatureReading:     24.0
         temperatureStatus:      normal
         upTime:                 2 days, 13:16
         usbStatus:              normal
         cellsrvStatus:          running
         msStatus:               running
         rsStatus:               running

2.2.5 Create the Storage Cell Disks

Use the CREATE CELLDISK command to create the cell disks.

In Example 2-3, the ALL option creates all the cell disks using the default names.

The cell disks are created with names in the form CD_lunID_cellname. The lunID and cellname values correspond to the id attribute of the LUN and name attribute of the cell. You can specify other disk names if you create single cell disks.

On Oracle Exadata Storage Servers with flash disks, the CREATE CELLDISK ALL command also creates cell disks on the flash disks. Flash cell disks are created only if they do not already exist; existing flash cell disks are not created again.

Example 2-3 Creating Cell Disks

CellCLI> CREATE CELLDISK ALL
CellDisk CD_00_cell01 successfully created
CellDisk CD_01_cell01 successfully created
CellDisk CD_02_cell01 successfully created
CellDisk CD_10_cell01 successfully created
CellDisk CD_11_cell01 successfully created
CellDisk FD_01_cell01 successfully created
CellDisk FD_02_cell01 successfully created
CellDisk FD_03_cell01 successfully created
CellDisk FD_04_cell01 successfully created
CellDisk FD_05_cell01 successfully created
CellDisk FD_06_cell01 successfully created
CellDisk FD_07_cell01 successfully created
CellDisk FD_08_cell01 successfully created
CellDisk FD_09_cell01 successfully created
CellDisk FD_10_cell01 successfully created
CellDisk FD_11_cell01 successfully created
CellDisk FD_12_cell01 successfully created
CellDisk FD_13_cell01 successfully created
CellDisk FD_14_cell01 successfully created
CellDisk FD_15_cell01 successfully created

2.2.6 Create the Grid Disks

Grid disk names must be unique across all cells within a single deployment. By following the recommended naming conventions for naming the grid and cell disks you automatically get unique names. If you do not use the default generated name when creating grid disks, then you must ensure that the grid disk name is unique across all storage cells. If the disk name is not unique, then it might not be possible to add the grid disk to an Oracle Automatic Storage Management (Oracle ASM) disk group.

When the ALL PREFIX option is used, the generated grid disk names are composed of the grid disk prefix followed by an underscore (_) and then the cell disk name.
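A minimal sketch of this naming convention follows; the cell disk names are illustrative, and the helper function is not part of CellCLI. Because each cell disk name embeds the cell name, the generated grid disk names come out unique across the deployment.

```python
# Sketch: how the ALL PREFIX option derives grid disk names, per the
# convention above (prefix + "_" + cell disk name). Cell disk names
# here are illustrative examples.

def grid_disk_names(prefix, cell_disks):
    """Generate grid disk names the way ALL PREFIX is described to."""
    return [f"{prefix}_{cd}" for cd in cell_disks]

cell01 = grid_disk_names("data", ["CD_00_cell01", "CD_01_cell01"])
cell02 = grid_disk_names("data", ["CD_00_cell02", "CD_01_cell02"])

# The cell name embedded in each cell disk name keeps the generated
# grid disk names unique across all cells.
all_names = cell01 + cell02
assert len(all_names) == len(set(all_names))
print(cell01[0])  # -> data_CD_00_cell01
```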

Use the CREATE GRIDDISK command to create the grid disks. The size of the disks depends on your requirements.

When creating a grid disk:

  • You do not have to specify the size attribute. The maximum size possible is automatically chosen if the size attribute is omitted.

  • Offset determines the position on the disk where the grid disk is allocated. The outermost tracks have lower offset values, and these tracks have greater speed and higher bandwidth. Offset can be explicitly specified to create grid disks that are relatively higher performing than other grid disks. If offset is not specified, then the best (warmest) available offset is chosen automatically in chronological order of grid disk creation. You should first create those grid disks expected to contain the most frequently accessed (hottest) data, and then create the grid disks that contain the relatively colder data.

    When using the normal_redundancy interleave option, a grid disk created with an offset is split in half: one half starts at the specified offset from the start of the outermost tracks, and the other half starts at the same offset from the start of the innermost tracks.

    With the high_redundancy interleave option, the grid disk is divided into three equal-size sections: one in the outermost third of the tracks, one in the middle third, and one in the innermost third.

  • Sparse grid disks only need to be created when using snapshots. The sparse disk stores the files generated by the snapshot. All standard grid disk operations are supported for sparse grid disks. Sparse grid disks have an additional attribute, virtualsize. The attribute configures the maximum virtual space the grid disk must provide. The attribute can be resized if the configuration runs out of virtual space on the sparse grid disk and there is physical space available.

    The maximum allowed size of a sparse disk is the size of free space on the cell disk. The maximum allowed virtual size is 100 TB.

    Oracle Exadata System Software monitors physical space used by sparse grid disks, and generates an alert when most of the space is used. To avoid out-of-space errors, add more physical space to the grid disk using the ALTER GRIDDISK command, or delete some of the Oracle ASM files to free space on the grid disk.
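As a sketch of the resize operation mentioned above, the following CellCLI commands grow a sparse grid disk's physical and virtual allocations. The disk name is borrowed from Example 2-5, and the sizes are illustrative values, not prescribed settings.

```
CellCLI> ALTER GRIDDISK sp_CD_00_cell01 size=400G
CellCLI> ALTER GRIDDISK sp_CD_00_cell01 virtualsize=25000G
```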

Example 2-4 Creating Grid Disks

This example shows how to create grid disks. In this example, the ALL HARDDISK PREFIX option creates one grid disk on each cell disk of the storage cell. The Oracle ASM disk group name is used with PREFIX to identify which grid disk belongs to the disk group. Prefix values data and reco are the names of the Oracle ASM disk groups that are created.

CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=data
GridDisk data_CD_00_cell01 successfully created
GridDisk data_CD_01_cell01 successfully created
GridDisk data_CD_02_cell01 successfully created
GridDisk data_CD_11_cell01 successfully created

CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=reco
GridDisk reco_CD_00_cell01 successfully created
GridDisk reco_CD_01_cell01 successfully created
GridDisk reco_CD_02_cell01 successfully created
GridDisk reco_CD_11_cell01 successfully created

The LIST GRIDDISK command shows the grid disks that are created.

CellCLI> LIST GRIDDISK
         data_CD_00_cell01       active
         data_CD_01_cell01       active
         data_CD_02_cell01       active
         data_CD_11_cell01       active

         reco_CD_00_cell01       active
         reco_CD_01_cell01       active
         reco_CD_02_cell01       active
         reco_CD_11_cell01       active

Example 2-5 Creating a Sparse Grid Disk

In this example, the sparse grid disk uses up to 300 GB from the physical cell disk size, but it exposes 20000 GB virtual space for the Oracle ASM files.

CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=sp, size=300G, virtualsize=20000G
GridDisk sp_CD_00_cell01 successfully created
GridDisk sp_CD_01_cell01 successfully created
GridDisk sp_CD_02_cell01 successfully created
GridDisk sp_CD_11_cell01 successfully created

2.2.7 Create the Flash Disks and Flash Cache

By default, the CREATE CELL command creates flash cell disks on all flash disks. The command then creates Exadata Smart Flash Cache on the flash cell disks.

  • Use the CREATE GRIDDISK ALL FLASHDISK PREFIX='FLASH' and CREATE FLASHCACHE commands to create the flash disks and flash cache.

To change the size of the Exadata Smart Flash Cache or to create flash grid disks, you must first remove the flash cache, and then either create the flash cache with a different size or create the flash grid disks.

2.2.8 Configure Oracle Auto Service Request (ASR)

Oracle Auto Service Request (ASR) for Oracle Exadata Database Machine automatically creates service requests by detecting common hardware faults.

ASR support covers selected components, such as disks and flash cards, in Oracle Exadata Storage Servers and Oracle Exadata Database Servers.

  • If you did not elect to configure Oracle Auto Service Request (ASR) when using Oracle Exadata Deployment Assistant (OEDA) to configure your Oracle Exadata Rack, then refer to Oracle Auto Service Request Quick Installation Guide for Oracle Exadata Database Machine for configuration instructions.

2.3 Network Configuration and IP Address Recommendations

The following are recommendations for the network configuration and IP addresses.

  • If your network is not already configured, then set up a fault-tolerant, private network subnet for Oracle Exadata Storage Servers and database server hosts with multiple switches to eliminate the switch as a single point of failure. If all the interconnections in the Exadata Cell network are connected through a single switch, then that switch can be a single point of failure.

    If you are using a managed switch, then ensure that the switch VLAN configuration isolates Exadata Cell network traffic from all other network traffic.

  • Allocate a block of IP addresses for the Oracle Exadata Storage Server general administration and the Lights Out (LO) remote management interfaces. Typically, these interfaces are on the same subnet, and may share the subnet with other hosts. For example, on the subnet, you could assign the block of IP addresses between and for the Oracle Exadata Storage Server general administration and LO remote management interfaces. Other hosts sharing the subnet would be allocated IP addresses outside the block. The general administration and LO remote management interfaces can be placed on separate subnets, but this is not required.

    Do not allocate addresses that end in .0, .1, or .255, or those that would be used as broadcast addresses for the specific netmask that you have selected. For example, avoid addresses such as,, and

    The following is a sample of four non-overlapping blocks of addresses. One set of addresses should be assigned to the normal Gigabit Ethernet interface/port for cells. The other may be assigned for the LO remote management port for the cells. The third set can be used for the database server Gigabit Ethernet port, and the fourth for the database server LO remote management port. (netmask (netmask (netmask (netmask

    The InfiniBand network should be a private network for use by the database server hosts and Oracle Exadata Storage Servers, and can have private local network addresses. These addresses must also be allocated in non-overlapping blocks.

    The following example has 2 blocks of local InfiniBand addresses. Both the database server InfiniBand and the storage server InfiniBand must be on the same subnet in order to communicate with each other. With bonding, only one subnet is necessary for InfiniBand addresses. (netmask (netmask

    The preceding subnet blocks do not conflict with each other, and do not conflict with the current allocation to any of the hosts. It is a good practice to allocate the subnet blocks so that they have the identical netmask, which helps to simplify network administration.


    For Oracle Exadata System Software, the maximum allowed number of hosts in an InfiniBand network is 4096. Therefore, the network prefix value for the InfiniBand network must be equal to or greater than 20. This means the netmask must be between and both inclusive.

    You can determine the network prefix value for a given host IP address and its netmask using the ipcalc utility on any Linux machine, as follows:

    ipcalc <host_ip_address> -m <netmask> -p

    Ensure the network allows for future expansion. For example, a network with prefix /31 is valid, but it only allows 1 host.
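    The prefix check described above can also be done with standard tooling. The following sketch uses the Python standard library ipaddress module; the sample netmasks are illustrative.

```python
# Sketch: checking the InfiniBand netmask constraint described above
# (at most 4096 hosts, so the network prefix must be /20 or longer).
import ipaddress

def prefix_length(netmask: str) -> int:
    """Return the prefix length for a dotted-decimal netmask."""
    return ipaddress.ip_network(f"0.0.0.0/{netmask}").prefixlen

def valid_infiniband_netmask(netmask: str) -> bool:
    # A /20 network spans 2**(32 - 20) = 4096 addresses, the documented
    # maximum number of hosts in an Exadata InfiniBand network.
    return prefix_length(netmask) >= 20

print(prefix_length("255.255.240.0"))           # -> 20
print(valid_infiniband_netmask("255.255.0.0"))  # -> False (/16 is too large)
```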

  • If a domain name system (DNS) is required, then set up your DNS to help reference cells and interconnections. Oracle Exadata Storage Servers do not require DNS. However, if DNS is required, then set up your DNS with the appropriate IP address and host name of Oracle Exadata Storage Server.

  • The InfiniBand network should be used for network and storage communication when using Oracle Clusterware. Use the following command to verify the private network for Oracle Clusterware communication is using InfiniBand:

    oifcfg getif -type cluster_interconnect
  • The Reliable Datagram Sockets (RDS) protocol should be used over the InfiniBand network for database server to cell communication and Oracle Real Application Clusters (Oracle RAC) communication. Check the alert log to verify that the private network for Oracle RAC is running the RDS protocol over the InfiniBand network. The following message should be in the log:

    cluster interconnect IPC version: Oracle RDS/IP (generic)

    If the RDS protocol is not being used over the InfiniBand network, then perform the following procedure:

    1. Shut down any processes that are using the Oracle binary.

    2. Change to the ORACLE_HOME/rdbms/lib directory.

    3. Run the following command:

      make -f ins_rdbms.mk ipc_rds ioracle


    If a separate Oracle home is used for Oracle ASM and the database, then RDS should be enabled for both of them.

2.4 Assigning IP Addresses for Oracle Exadata System Software

This topic summarizes the Oracle Exadata System Software network preparation before installing the new storage cell.

Each storage cell contains the following network ports:

  • One dual-port InfiniBand card

    Oracle Exadata Storage Servers are designed to be connected to two separate InfiniBand switches for high availability. The dual port card is only for availability. Each port of the InfiniBand card is capable of transferring the full data bandwidth generated by the storage cell. The loss of one network connection does not impact the performance of the storage cell.

  • Gigabit Ethernet ports for normal network access, depending on the platform

    • Oracle Exadata Storage Server comes with four Gigabit Ethernet ports. However, only connect one port to a switch, and configure it for network access.

  • One Gigabit Ethernet port is exposed by the Baseboard Management Controller (BMC), or Management Controller (MC) on Oracle Exadata Storage Server. This port is used for Lights Out (LO) remote management.

    • Oracle Exadata Storage Server uses Integrated Lights Out Manager (ILOM) for remote management.


    You can install valid Secure Sockets Layer (SSL) certificates if you plan to use the Web interface to access the ILOM.

To prepare the Exadata Cell network, you must perform the following procedure:

  1. Assign one address to the bonded InfiniBand port. When you first set up the cell, you are prompted for the BONDIB0 configuration information. This information is used automatically during the CREATE CELL command on first boot, and provides the data path for communication between the cell and the database servers.


    If you manually re-create the cell after initial configuration, specify the interconnect with the following command:

    CREATE CELL interconnect1=BONDIB0

    Oracle recommends that this InfiniBand network be a private network.

  2. Assign an IP address to the cell for network access.

  3. Assign an IP address to the cell for LO remote management.

    You can access the remote management functionality with a Java-enabled Web browser at the assigned IP address.

See Also:

Oracle Integrated Lights Out Manager (ILOM) Documentation at http://www.oracle.com/goto/ilom/docs

2.5 Configuring Oracle Exadata System Software for Your Location

This section describes the storage cell configuration, and contains the following topics:

2.5.1 Configuring LO Remote Management With Static IP for Oracle Exadata Storage Servers

Basic lights-out (LO) remote management configuration is done during the first boot.

Refer to "Preparing the Servers" for LO remote management configuration information.


Do not enable the sideband management available in ILOM. Doing so disables all the SNMP agent reporting and monitoring functionality for the server.

2.5.2 Preparing the Servers

This procedure describes how to prepare the database servers and Oracle Exadata Storage Servers for use.

  1. Configure lights-out remote management.
  2. Power on the storage cell to boot its operating system.
  3. Respond to the prompts to configure the system, after the storage cell boots.

    Press y to confirm, or n to retry or terminate when you are prompted for a yes or no response during the configuration steps. The yes or no prompt shows the default choice in brackets. If you do not enter a response, then the default choice is selected when you press Enter.

  4. Check the network connections.

    The list of all discovered interfaces displays, and you are prompted to check the cables for those interfaces that do not have an active network cable connection. You can retry the configuration steps after connecting the cables, or ignore the unconnected interfaces. Only connected interfaces can be configured.

  5. Enter the DNS server IP addresses, if needed.
    A DNS is not needed for a standalone, private storage environment.
  6. Enter the time preference.
    • Choose the local time region number from the displayed list of available time regions.
    • Choose the location within the time region number from the displayed list of locations.
  7. Enter the Network Time Protocol (NTP) servers.

    These servers are required to maintain the time on the system correctly, and are synchronized to your local time source.

  8. Enter the Ethernet addresses, InfiniBand IP addresses and interfaces.

    A list of all Ethernet and InfiniBand interfaces that have an active network connection is displayed with the name of the interface on the extreme left. The InfiniBand interface is named BONDIB0 and uses bonding between physical InfiniBand interfaces ib0 and ib1. Bonding provides the ability to transparently fail over from ib0 to ib1 or from ib1 to ib0 if connectivity is lost to ib0 or ib1, respectively.

    For each Ethernet and InfiniBand interface you configure, you are prompted for the following that apply to the interface:

    • IP address

    • Netmask

    • Gateway IP address

    • Fully-qualified domain name

    If you choose not to configure each interface in the list, then that interface is not configured, and it does not start at system startup. After the configuration of the IP addresses, the system completes the startup process. At the end of the process, additional packages are installed, and then the installation of Oracle Exadata Storage Server is complete.

  9. Select the canonical, fully-qualified domain name from the list.

    This host name is the primary public host name for the server, and is part of the /etc/sysconfig/network file.

    If more than one Ethernet interface was configured with the gateway, then select the line number for the default gateway. This gateway is in the /etc/sysconfig/network file, and is used as the default gateway.

  10. Provide the following information when prompted for it:
    • ILOM full, domain-qualified host name

    • ILOM IP address

    • ILOM netmask

    • ILOM gateway

    • ILOM NTP servers

    • (Optional) ILOM DNS server

  11. (Oracle Exadata Storage Server only) Change the initial passwords for the root, celladmin, and cellmonitor users to more secure passwords.


    If you do not have the password for the root user, then contact Oracle Support Services.

    To change the passwords, log in as the root user, then use the passwd command to change the passwords, such as the following:

    # passwd
    # passwd celladmin
    # passwd cellmonitor

    To verify the changed passwords, log in and out using each of the user names.


    The cellmonitor user is set up with privileges that enable you to view Exadata Cell objects only. You must be logged in as the celladmin user to perform administrative tasks.

  12. Check for any failures reported in the /var/log/cellos/vldrun.first_boot.log file after the first boot configuration.

    For each failed validation, perform the following procedure:

    1. Look for the /var/log/cellos/validations/failed_validation_name.SuggestedRemedy file.

      The file exists only if the validation process has identified some corrective action. Follow the suggestions in the file to correct the cause of the failure.

    2. If the SuggestedRemedy file does not exist, then examine the log file for the failed validation in /var/log/cellos/validations to track down the cause, and correct it as needed.
  13. (Oracle Exadata Storage Server only) Use the following commands to verify acceptable performance levels:
    cellcli -e "alter cell shutdown services cellsrv"
    cellcli -e "calibrate"

2.6 Configuring Cells, Cell Disks, and Grid Disks with CellCLI

After you complete the tasks described in "Preparing the Servers", you must configure the cells, cell disks and grid disks for each new storage server.

During the procedure, you can display help using the HELP command, and object attributes using the DESCRIBE command. Example 2-6 shows how to display help and a list of attributes for Exadata Cell objects.

Use the following procedure to create the cells, cell disks, and grid disks for Oracle Exadata Storage Servers:

  1. Log in as the celladmin user.
  2. Use the cellcli command to start Cell Control Command-Line Interface (CellCLI) to connect to the storage cell.

    The required cell services, Restart Server (RS) and Management Server (MS), should be running after the software has been installed. If they are not running, then an error message displays when you use the CellCLI utility, and you must start the Oracle Exadata System Software RS and MS services before continuing.

  3. Configure the cell using the CellCLI ALTER CELL command. During first boot, the cell is created, and the flash cell disks and flash cache are defined automatically.
    CellCLI> ALTER CELL name=cell_name,                                      -
               smtpServer='my_mail.example.com',                             -
               smtpFromAddr='john.doe@example.com',                          -
               smtpPwd=email_address_password,                               -
               smtpToAddr='jane.smith@example.com',                          -
               notificationPolicy='clear'
  4. Use the LIST CELL DETAIL command to check the storage cell attributes, and to verify the current configuration:
    CellCLI> LIST CELL DETAIL
  5. Create the cell disks, using the CREATE CELLDISK command. In most cases, you can use the default cell disk names and LUN IDs. The following command creates cell disks and LUN IDs with the default values:
    CellCLI> CREATE CELLDISK ALL
  6. Create grid disks on each cell disk of the storage cell, using the CREATE GRIDDISK command.
  7. Exit the CellCLI utility after setting up the storage cell using the following command:
    CellCLI> EXIT
  8. Repeat the configuration process for each new storage cell. This procedure must be done on each new cell before configuring the Exadata Cell realm, the database server hosts, or the database and Oracle ASM instances.

Example 2-6 Displaying Help Information

CellCLI> HELP

CellCLI> DESCRIBE CELL


After you complete the cell configuration, you can perform additional optional configuration steps on the storage cell.

For database server hosts other than those in Oracle Exadata Database Machine, refer to release notes for enabling them to work with Oracle Exadata Storage Servers.

2.7 Creating Flash Cache and Flash Grid Disks

Oracle Exadata Storage Servers are equipped with flash disks. These flash disks can be used to create flash grid disks to store frequently accessed data.

Alternatively, all or part of the flash disk space can be dedicated to Exadata Smart Flash Cache. In this case, the most frequently-accessed data is cached in Exadata Smart Flash Cache.

The ALTER CELLDISK ... FLUSH command must be run before exporting a cell disk to ensure that the data not synchronized with the disk (dirty data) is flushed from flash cache to the grid disks.

  • By default, the CREATE CELL command creates flash cell disks on all flash disks, and then creates Exadata Smart Flash Cache on the flash cell disks.

    To change the size of the Exadata Smart Flash Cache or to create flash grid disks, you must first remove the flash cache, and then either create the flash cache with a different size or create the flash grid disks.

  • To change the amount of flash cache allocated, use the flashcache attribute with the CREATE CELL command.
    If the flashcache attribute is not specified, then all available flash space is allocated for flash cache.
  • To explicitly create the Exadata Smart Flash Cache, use the CREATE FLASHCACHE command. Use the celldisk attribute to specify which flash cell disks contain the Exadata Smart Flash Cache.

    Alternatively, you can specify ALL instead of celldisk to use all flash cell disks. Use the size attribute to specify the total size of the flash cache to allocate. The allocation is evenly distributed across all flash cell disks.

Example 2-7 Using the CREATE FLASHCACHE Command

This example shows how to create the Exadata Smart Flash Cache. The entire size of the flash cell disk is not used because the size attribute has been set.

CellCLI> CREATE FLASHCACHE ALL SIZE=365.25G

Flash cache cell01_FLASHCACHE successfully created

Example 2-8 Using the CREATE GRIDDISK Command to Create Flash Grid Disks

This example shows how to use the remaining space on the flash cell disks to create flash grid disks.

CellCLI> CREATE GRIDDISK ALL FLASHDISK PREFIX='FLASH'
GridDisk FLASH_FD_00_cell01 successfully created
GridDisk FLASH_FD_01_cell01 successfully created
GridDisk FLASH_FD_02_cell01 successfully created
GridDisk FLASH_FD_03_cell01 successfully created
GridDisk FLASH_FD_04_cell01 successfully created
GridDisk FLASH_FD_05_cell01 successfully created
GridDisk FLASH_FD_06_cell01 successfully created
GridDisk FLASH_FD_07_cell01 successfully created
GridDisk FLASH_FD_08_cell01 successfully created
GridDisk FLASH_FD_09_cell01 successfully created
GridDisk FLASH_FD_10_cell01 successfully created
GridDisk FLASH_FD_11_cell01 successfully created
GridDisk FLASH_FD_12_cell01 successfully created
GridDisk FLASH_FD_13_cell01 successfully created
GridDisk FLASH_FD_14_cell01 successfully created
GridDisk FLASH_FD_15_cell01 successfully created

CellCLI> LIST GRIDDISK
         FLASH_FD_00_cell01      active
         FLASH_FD_01_cell01      active
         FLASH_FD_02_cell01      active
         FLASH_FD_03_cell01      active
         FLASH_FD_04_cell01      active
         FLASH_FD_05_cell01      active
         FLASH_FD_06_cell01      active
         FLASH_FD_07_cell01      active
         FLASH_FD_08_cell01      active
         FLASH_FD_09_cell01      active
         FLASH_FD_10_cell01      active
         FLASH_FD_11_cell01      active
         FLASH_FD_12_cell01      active
         FLASH_FD_13_cell01      active
         FLASH_FD_14_cell01      active
         FLASH_FD_15_cell01      active

Example 2-9 Displaying the Exadata Smart Flash Cache Configuration for a Cell

Use the LIST FLASHCACHE command to display the Exadata Smart Flash Cache configuration for the cell, as shown in this example.

CellCLI> LIST FLASHCACHE DETAIL

         name:                   cell01_FLASHCACHE
         cellDisk:               FD_00_cell01, FD_01_cell01,FD_02_cell01, 
                                 FD_03_cell01, FD_04_cell01, FD_05_cell01, 
                                 FD_06_cell01, FD_07_cell01, FD_08_cell01, 
                                 FD_09_cell01, FD_10_cell01, FD_11_cell01, 
                                 FD_12_cell01, FD_13_cell01, FD_14_cell01, 
         creationTime:           2009-10-19T17:18:35-07:00
         id:                     b79b3376-7b89-4de8-8051-6eefc442c2fa
         size:                   365.25G
         status:                 normal

Example 2-10 Dropping Exadata Smart Flash Cache from a Cell

To remove Exadata Smart Flash Cache from a cell, use the DROP FLASHCACHE command.

CellCLI> DROP FLASHCACHE

Flash cache cell01_FLASHCACHE successfully dropped

2.8 Setting Up Configuration Files for a Database Server Host

After Oracle Exadata Storage Server is configured, the database server host must be configured with the cellinit.ora and the cellip.ora files to use the cell. The files are located in the /etc/oracle/cell/network-config directory.

  • The cellinit.ora file contains the database IP addresses.

  • The cellip.ora file contains the storage cell IP addresses.

Both files are located on the database server host. These configuration files contain IP addresses, not host names.

The cellinit.ora file is host-specific, and contains all database IP addresses that connect to the storage network used by Oracle Exadata Storage Servers. This file must exist on each database server that connects to Oracle Exadata Storage Servers. The IP addresses are specified in Classless Inter-Domain Routing (CIDR) format. The first IP address must be designated as ipaddress1, the second IP address as ipaddress2, and so on.

The following list shows example IP address entries for a single database server in Oracle Exadata Database Machine:

  • Oracle Exadata Database Server in Oracle Exadata Database Machine X4-2
    • ipaddress1=

    • ipaddress2=

  • Oracle Exadata Database Server in Oracle Exadata Database Machine X3-2 or Oracle Exadata Database Machine X2-2
    • ipaddress1=

  • Oracle Exadata Database Server in Oracle Exadata Database Machine X3-8 or Oracle Exadata Database Machine X2-8
    • ipaddress1=

    • ipaddress2=

    • ipaddress3=

    • ipaddress4=

The IP addresses should not be changed after this file is created.
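
As a minimal illustration, the two files might look as follows for an X4-2 database server connecting to three storage cells. The addresses below are hypothetical placeholders, not values from this document, and the `--- ---` header lines are annotations rather than file content:

```
--- cellinit.ora (hypothetical addresses) ---
ipaddress1=192.168.10.1/24
ipaddress2=192.168.10.2/24

--- cellip.ora (hypothetical addresses) ---
cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"
```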


At boot time on an 8-socket system, each database server generates a cellaffinity.ora configuration file. The cellaffinity.ora file resides in the /etc/oracle/cell/network-config directory, and must be readable by Oracle Database.

The file contains a mapping between the NUMA node numbers and the IP address of the network interface card closest to each node. Oracle Database uses the file to select the closest network interface card when communicating with Oracle Exadata Storage Servers, thereby optimizing performance.

This file is generated and used only on 8-socket systems. On a 2-socket system, there is no performance to be gained in this manner, so no cellaffinity.ora file is generated. The file is not intended to be edited directly with a text editor.

To configure a database server host for use with a cell, refer to Oracle Exadata Database Machine Maintenance Guide.

2.9 Understanding Automated Cell Maintenance

The Management Server (MS) includes a file deletion policy based on the date.

When there is a shortage of space in the Automatic Diagnostic Repository (ADR) directory, then MS deletes the following files:

  • All files in the ADR base directory older than 7 days.

  • All files in the LOG_HOME directory older than 7 days.

  • All metric history files older than 7 days.

The retention period of seven days is the default. The retention period can be modified using the metricHistoryDays and diagHistoryDays attributes with the ALTER CELL command. The diagHistoryDays attribute controls the ADR files, and the metricHistoryDays attribute controls the other files.

If there is sufficient disk space, then trace files are not purged. This can result in files persisting in the ADR base directory past the time limit specified by diagHistoryDays.
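
For example, to extend both retention periods to 14 days, a command of the following form could be used. The values shown are illustrative only:

```
CellCLI> ALTER CELL metricHistoryDays=14, diagHistoryDays=14
```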

In addition, the alert.log file is renamed if it is larger than 10 MB, and versions of the file that are older than 7 days are deleted if their total size is greater than 50 MB.

MS also includes a file deletion policy that is triggered when file system utilization is high. Deletion of files in the / (root) directory and the /var/log/oracle directory is triggered when file system utilization reaches 80 percent. Deletion of files in the /opt/oracle file system is triggered when utilization reaches 90 percent, and the alert is cleared when utilization falls below 85 percent. An alert is sent before the deletion begins. The alert includes the name of the directory, and the space usage for the subdirectories. In particular, the deletion policy is as follows:

  • For the /var/log/oracle file system, files in the ADR base directory, the metric history directory, and the LOG_HOME directory are deleted using a policy based on the file modification time stamp.

    • Files older than the number of days set by the metricHistoryDays attribute value are deleted first.
    • Successive deletions occur for earlier files, down to files with modification time stamps older than or equal to 10 minutes, or until file system utilization is less than 75 percent.
    • Renamed alert.log files and ms-odl generation files that are over 5 MB and older than the successively shorter age intervals are also deleted.
    • Crash files in the /var/log/oracle/crashfiles directory that are over 5 MB and older than one day are deleted.
  • For the /opt/oracle file system, the deletion policy is similar to the preceding settings. However, the file threshold is 90 percent, and files are deleted until the file system utilization is less than 85 percent.

  • When the file system is full, the files controlled by the diagHistoryDays and metricHistoryDays attributes are purged using the same purging policy.

  • For the / file system, files in the home directories (cellmonitor and celladmin), /tmp, /var/crash, and /var/spool directories that are over 5 MB and older than one day are deleted.
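
The utilization triggers described above can be summarized in a short Python sketch. This is purely illustrative: MS exposes no such API, and the 75 percent stop level for the root file system is assumed to match the /var/log/oracle behavior described above:

```python
# Illustrative summary of the file system utilization thresholds
# described above (values are percent full). "trigger" is the level
# at which deletion begins; "clear" is the assumed level at which
# deletion stops or the alert is cleared.
FS_THRESHOLDS = {
    "/": {"trigger": 80, "clear": 75},
    "/var/log/oracle": {"trigger": 80, "clear": 75},
    "/opt/oracle": {"trigger": 90, "clear": 85},
}

def purge_needed(file_system: str, utilization_pct: float) -> bool:
    """Return True when MS would begin deleting files in file_system."""
    return utilization_pct >= FS_THRESHOLDS[file_system]["trigger"]
```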

Every hour, MS deletes eligible alerts from the alert history using the following criteria. Alerts are eligible if they are stateless or if they are stateful alerts that have been resolved.

  • If there are fewer than 500 alerts, then alerts older than 100 days are deleted.

  • If there are between 500 and 999 alerts, then the alerts older than 7 days are deleted.

  • If there are 1,000 or more alerts, then all eligible alerts are deleted every minute.
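
The hourly alert-cleanup thresholds above can be sketched as a small Python function. This is illustrative only; MS does not expose such a function:

```python
def alert_retention_days(total_alerts: int):
    """Age cutoff (in days) for deleting eligible alerts, per the
    thresholds described above. None means that all eligible alerts
    are deleted."""
    if total_alerts < 500:
        return 100   # fewer than 500 alerts: delete those older than 100 days
    if total_alerts < 1000:
        return 7     # 500-999 alerts: delete those older than 7 days
    return None      # 1,000 or more: delete all eligible alerts
```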


Any directories or files with SAVE in the name are not deleted.

Related Topics