3 Configuring the New Hardware
This section contains the following tasks needed to configure the new hardware:
Note:
The new and existing racks must be at the same patch level for Oracle Exadata Database Servers and Oracle Exadata Storage Servers, including the operating system. Refer to Reviewing Release and Patch Levels for additional information.
- Changing the Interface Names
- Setting Up New Servers
New servers need to be configured when extending Oracle Exadata Elastic Configurations.
- Setting up a New Rack
A new rack is configured at the factory. However, it is necessary to set up the network and configuration files for use with the existing rack.
- Setting User Equivalence
User equivalence can be configured to include all servers once the servers are online.
- Starting the Cluster
- Adding Grid Disks to Oracle ASM Disk Groups
- Adding Servers to a Cluster
This procedure describes how to add servers to a cluster.
- Configuring Cell Alerts for New Oracle Exadata Storage Servers
Cell alerts need to be configured for the new Oracle Exadata Storage Servers.
- Adding Oracle Database Software to the New Servers
- Adding Database Instance to the New Servers
- Returning the Rack to Service
3.1 Changing the Interface Names
For systems with RoCE Network Fabric (X8M and later), BONDETH0 is used for the bonded Ethernet client network.
For systems with InfiniBand Network Fabric (X3 to X8), BONDIB0 and BONDETH0 are typically used for the bonded RDMA Network Fabric and the bonded Ethernet client network, respectively.
For Oracle Exadata Database Machine X2-2 and earlier systems (with X4170 and X4275 servers), BOND0 and BOND1 are the names for the bonded RDMA Network Fabric and bonded Ethernet client networks, respectively.
If you are adding new servers to an existing Oracle Exadata, ensure the database servers use the same names for bonded configuration. You can either change the new database servers to match the existing server interface names, or change the existing server interface names and Oracle Cluster Registry (OCR) configuration to match the new servers.
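As a hedged sketch of how you might compare the two sides before deciding which to rename (the interface and subnet values below are illustrative only, and oifcfg must be run as the Oracle Grid Infrastructure owner from the Grid home):

```shell
# On each database server, list the bonded interfaces the OS currently defines.
ip link show | grep -i bond

# List the network interfaces currently registered in OCR.
oifcfg getif

# Illustrative only: re-register a renamed private interconnect in OCR.
# oifcfg delif -global bond0
# oifcfg setif -global bondib0/192.168.8.0:cluster_interconnect
```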
Do the following after changing the interface names:
-
Edit the entries in the /etc/sysctl.conf file on the database servers so that the entries for the RDMA Network Fabric match. The following is an example of the file entries before editing. One set of entries must be changed to match the other set.
Found in X2 node
net.ipv4.neigh.bondib0.locktime = 0
net.ipv4.conf.bondib0.arp_ignore = 1
net.ipv4.conf.bondib0.arp_accept = 1
net.ipv4.neigh.bondib0.base_reachable_time_ms = 10000
net.ipv4.neigh.bondib0.delay_first_probe_time = 1
Found in V2 node
net.ipv4.conf.bond0.arp_accept=1
net.ipv4.neigh.bond0.base_reachable_time_ms=10000
net.ipv4.neigh.bond0.delay_first_probe_time=1
-
Save the changes to the sysctl.conf file.
-
Use the oifcfg utility to change the OCR configuration, if the new names differ from what is currently in OCR. The interface names for Oracle Exadata Storage Servers do not have to be changed.
-
Continue configuring the new hardware, as follows:
-
If the hardware is new servers, then go to Setting Up New Servers to configure the servers.
-
If the hardware is a new rack, then go to Setting up a New Rack to configure the rack.
See Also:
Oracle Exadata Database Machine Maintenance Guide for information about changing the RDMA Network Fabric configuration

Parent topic: Configuring the New Hardware
3.2 Setting Up New Servers
New servers need to be configured when extending Oracle Exadata Elastic Configurations.
The new servers do not have any configuration information, and you cannot use Oracle Enterprise Manager Cloud Control to configure them. The servers are configured using the Oracle Exadata Deployment Assistant (OEDA) or manually.
- Configuring Servers Using OEDA
When adding servers to an Oracle Exadata, you can use OEDA. - Configuring New Servers Manually
When adding servers to an Oracle Exadata, you can configure the servers manually instead of using OEDA.
Parent topic: Configuring the New Hardware
3.2.1 Configuring Servers Using OEDA
When adding servers to an Oracle Exadata, you can use OEDA.
Note:
In order to configure the servers with Oracle Exadata Deployment Assistant (OEDA), the new server information must be entered in OEDA, and configuration files generated.

3.3 Setting up a New Rack
A new rack is configured at the factory. However, it is necessary to set up the network and configuration files for use with the existing rack.
-
Configure the new rack and servers as described in Configuring Oracle Exadata Database Machine in the Oracle Exadata Database Machine Installation and Configuration Guide.
Complete the appropriate tasks to configure the rack and its components, but do not complete the task for installing the Exadata configuration information and software on the servers. This task will be completed later in this procedure.
-
Verify the time is the same on the new servers as on the existing servers. This check is performed for storage servers and database servers.
-
Ensure the NTP settings are the same on the new servers as on the existing servers. This check is performed for storage servers and database servers.
-
Configure HugePages on the new servers to match the existing servers.
-
Ensure the RDMA Network Fabric and bonded client Ethernet interface names on the new database servers match the existing database servers.
-
Configure the rack as described in Loading the Configuration Information and Installing the Software in Oracle Exadata Database Machine Installation and Configuration Guide. You can use either the Oracle Exadata Deployment Assistant (OEDA) or Oracle Enterprise Manager Cloud Control to configure the rack.
Note:
- Only run OEDA up to the Create Grid Disks step, then configure storage servers as described in Configuring Cells, Cell Disks, and Grid Disks with CellCLI in Oracle Exadata System Software User's Guide.
- When adding servers with 3 TB High Capacity (HC) disks to existing servers with 2 TB disks, it is recommended to follow the procedure in My Oracle Support Doc ID 1476336.1 to properly define the grid disks and disk groups. At this point of setting up the rack, it is only necessary to define the grid disks. The disk groups are created after the cluster has been extended onto the new nodes.
- If the existing storage servers are Extreme Flash (EF) and you are adding High Capacity (HC) storage servers, or if the existing storage servers are HC and you are adding EF storage servers, then you must place the new disks in new disk groups. You cannot mix EF and HC disks within the same disk group.
-
Go to Setting User Equivalence to continue the hardware configuration.
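The time, NTP, and HugePages consistency checks in the steps above can be sketched with dcli (a hedged example; it assumes the dcli group files such as all_group and dbs_group described in Setting User Equivalence, and that NTP is configured through /etc/ntp.conf):

```shell
# Compare clock time across all existing and new servers (UTC output should agree).
dcli -g all_group -l root 'date -u'

# Compare NTP server settings across database and storage servers.
dcli -g all_group -l root 'grep -v "^#" /etc/ntp.conf | grep server'

# Compare HugePages configuration and allocation across database servers.
dcli -g dbs_group -l root 'grep -i hugepages /etc/sysctl.conf; grep HugePages_Total /proc/meminfo'
```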
3.4 Setting User Equivalence
User equivalence can be configured to include all servers once the servers are online.
This procedure must be done before running the post-cabling utilities.
-
Log in to each new server manually using SSH to verify that each server can accept logins and that the passwords are correct.
-
Modify the
dbs_group and cell_group files on all servers to include all servers.
-
Create the new directories on the first existing database server.
# mkdir /root/new_group_files
# mkdir /root/old_group_files
# mkdir /root/group_files
-
Copy the group files for the new servers to the /root/new_group_files directory.
-
Copy the group files for the existing servers to the /root/old_group_files directory.
-
Copy the group files for the existing servers to the /root/group_files directory.
-
Update the group files to include the existing and new servers.
cat /root/new_group_files/dbs_group >> /root/group_files/dbs_group
cat /root/new_group_files/cell_group >> /root/group_files/cell_group
cat /root/new_group_files/all_group >> /root/group_files/all_group
cat /root/new_group_files/dbs_priv_group >> /root/group_files/dbs_priv_group
cat /root/new_group_files/cell_priv_group >> /root/group_files/cell_priv_group
cat /root/new_group_files/all_priv_group >> /root/group_files/all_priv_group
-
Make the updated group files the default group files. The updated group files contain the existing and new servers.
cp /root/group_files/* /root
cp /root/group_files/* /opt/oracle.SupportTools/onecommand
-
Put a copy of the updated group files in the root user, oracle user, and Oracle Grid Infrastructure user home directories, and ensure that the files are owned by the respective users.
-
-
Modify the /etc/hosts file on the existing and new database servers to include the existing RDMA Network Fabric IP addresses for the database servers and storage servers. The existing and new all_priv_group files can be used for this step.
Note:
Do not copy the /etc/hosts file from one server to the other servers. Edit the file on each server.
-
Run the setssh-Linux.sh script as the root user on one of the existing database servers to configure user equivalence for all servers using the following command. Oracle recommends using the first database server.
# /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/path_to_file/all_group -n N
In the preceding command, path_to_file is the directory path for the all_group file containing the names for the existing and new servers.
Note:
For Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) systems, use the setssh.sh command to configure user equivalence. The command line options for the setssh.sh command differ from the setssh-Linux.sh command. Run setssh.sh without parameters to see the proper syntax.
-
Add the known hosts using RDMA Network Fabric. This step requires that all database servers are accessible by way of their InfiniBand interfaces.
# /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/path_to_file/all_priv_group -n N -p password
-
Verify equivalence is configured.
# dcli -g all_group -l root date
# dcli -g all_priv_group -l root date
-
Run the setssh-Linux.sh script as the oracle user on one of the existing database servers to configure user equivalence for all servers using the following command. Oracle recommends using the first database server. If there are separate owners for the Oracle Grid Infrastructure software, then run a similar command for each owner.
$ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/path_to_file/dbs_group -n N
In the preceding command, path_to_file is the directory path for the dbs_group file. The file contains the names for the existing and new servers.
Note:
-
For Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers) systems, use the setssh.sh command to configure user equivalence.
-
It may be necessary to temporarily change the permissions on the setssh-Linux.sh file to 755 for this step. Change the permissions back to the original settings after completing this step.
-
-
Add the known hosts using RDMA Network Fabric. This step requires that all database servers are accessible by way of their InfiniBand interfaces.
$ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/root/group_files/dbs_priv_group -n N
-
Verify equivalence is configured.
$ dcli -g dbs_group -l oracle date
$ dcli -g dbs_priv_group -l oracle date
If there is a separate Oracle Grid Infrastructure user, then also run the preceding commands for that user, substituting the grid user name for the oracle user.
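If the dcli verification reports failures, a quick way to find the offending host is a BatchMode loop over the group file (a hedged sketch; it assumes dbs_group contains one host name per line, and that BatchMode makes ssh fail instead of prompting for a password):

```shell
# Any host that still prompts for a password is reported as FAIL instead of hanging.
while read -r host; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true \
    && echo "OK   $host" \
    || echo "FAIL $host"
done < /root/group_files/dbs_group
```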
Parent topic: Configuring the New Hardware
3.5 Starting the Cluster
The following procedure describes how to start the cluster if it was stopped earlier for cabling an additional rack.
Note:
-
Oracle recommends you start one server, and let it come up fully before starting Oracle Clusterware on the rest of the servers.
-
It is not necessary to stop a cluster when extending Oracle Exadata Database Machine Half Rack to a Full Rack, or a Quarter Rack to a Half Rack or Full Rack.
-
Log in as the
root
user on the original cluster. -
Start one server of the cluster.
# Grid_home/grid/bin/crsctl start cluster
-
Check the status of the server.
Grid_home/grid/bin/crsctl stat res -t
Run the preceding command until it shows that the first server has started.
-
Start the other servers in the cluster.
# Grid_home/grid/bin/crsctl start cluster -all
-
Check the status of the servers.
Grid_home/grid/bin/crsctl stat res -t
It may take several minutes for all servers to start and join the cluster.
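The "run the command until the server has started" guidance in steps 3 and 5 can be sketched as a simple polling loop (illustrative only; Grid_home is a placeholder for your actual Grid Infrastructure home, and the grep pattern is a coarse readiness check that you may want to tighten for specific resources):

```shell
# Poll cluster resource status every 30 seconds until crsctl reports ONLINE resources.
until Grid_home/grid/bin/crsctl stat res -t | grep -q 'ONLINE'; do
  echo "Waiting for cluster resources to come online..."
  sleep 30
done
```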
Parent topic: Configuring the New Hardware
3.6 Adding Grid Disks to Oracle ASM Disk Groups
Grid disks can be added to Oracle ASM disk groups before or after the new servers are added to the cluster. The advantage of adding the grid disks before adding the new servers is that the rebalance operation can start earlier. The advantage of adding the grid disks after adding the new servers is that the rebalance operation can be done on the new servers so less load is placed on the existing servers.
The following procedure describes how to add grid disks to existing Oracle ASM disk groups.
Note:
-
It is assumed in the following examples that the newly-installed storage servers have the same grid disk configuration as the existing storage servers, and that the additional grid disks will be added to existing disk groups.
The information gathered about the current configuration should be used when setting up the grid disks.
-
If the existing storage servers have High Performance (HP) disks and you are adding storage servers with High Capacity (HC) disks, or the existing storage servers have HC disks and you are adding storage servers with HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.
-
Ensure the new storage servers are running the same version of software as storage servers already in use. Run the following command on the first database server:
dcli -g cell_group -l root "imageinfo -ver"
Note:
If the Oracle Exadata System Software on the storage servers does not match, then upgrade or patch the software to be at the same level. This could mean patching either the existing servers or the new servers. Refer to Reviewing Release and Patch Levels for additional information.
-
Modify the /etc/oracle/cell/network-config/cellip.ora file on all database servers to have a complete list of all storage servers. The cellip.ora file should be identical on all database servers.
When adding Oracle Exadata Storage Server X4-2L servers, the cellip.ora file contains two IP addresses listed for each cell. Copy each line completely to include the two IP addresses, and merge the addresses in the cellip.ora file of the existing cluster.
-
From any database server, make a backup copy of the cellip.ora file.
cd /etc/oracle/cell/network-config
cp cellip.ora cellip.ora.orig
cp cellip.ora cellip.ora-bak
- Edit the cellip.ora-bak file and add the IP addresses for the new storage servers.
-
Copy the edited file to the cellip.ora file on all database nodes using dcli. Use a file named dbnodes that contains the names of every database server in the cluster, with each database server name on a separate line. Run the following command from the directory that contains the cellip.ora-bak file.
/usr/local/bin/dcli -g dbnodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
The following is an example of the cellip.ora file after expanding Oracle Exadata Database Machine X3-2 Half Rack to Full Rack using Oracle Exadata Storage Server X4-2L servers:
cell="192.168.10.9"
cell="192.168.10.10"
cell="192.168.10.11"
cell="192.168.10.12"
cell="192.168.10.13"
cell="192.168.10.14"
cell="192.168.10.15"
cell="192.168.10.17;192.168.10.18"
cell="192.168.10.19;192.168.10.20"
cell="192.168.10.21;192.168.10.22"
cell="192.168.10.23;192.168.10.24"
cell="192.168.10.25;192.168.10.26"
cell="192.168.10.27;192.168.10.28"
cell="192.168.10.29;192.168.10.30"
In the preceding example, lines 1 through 7 are for the original servers, and lines 8 through 14 are for the new servers. Oracle Exadata Storage Server X4-2L servers have two IP addresses each.
-
-
Ensure the updated cellip.ora file is on all database servers. The updated file must include a complete list of all storage servers.
Verify accessibility of all grid disks from one of the original database servers. The following command can be run as the root user or the oracle user.
$ Grid_home/grid/bin/kfod disks=all dscvgroup=true
The output from the command shows grid disks from the original and new storage servers.
-
Add the grid disks from the new storage servers to the existing disk groups using commands similar to the following. You cannot have both high performance disks and high capacity disks in the same disk group.
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data ADD DISK
2> 'o/*/DATA*dm02*'
3> rebalance power 11;
In the preceding commands, a Full Rack was added to an existing Oracle Exadata Rack. The prefix for the new rack is dm02, and the grid disk prefix is DATA.
The following is an example in which an Oracle Exadata Database Machine Half Rack was upgraded to a Full Rack. The cell host names in the original system were named dm01cel01 through dm01cel07. The new cell host names are dm01cel08 through dm01cel14.
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data ADD DISK
2> 'o/*/DATA*dm01cel08*',
3> 'o/*/DATA*dm01cel09*',
4> 'o/*/DATA*dm01cel10*',
5> 'o/*/DATA*dm01cel11*',
6> 'o/*/DATA*dm01cel12*',
7> 'o/*/DATA*dm01cel13*',
8> 'o/*/DATA*dm01cel14*'
9> rebalance power 11;
Note:
-
If your system is running Oracle Database 11g release 2 (11.2.0.1), then Oracle recommends a power limit of 11 so that the rebalance completes as quickly as possible. If your system is running Oracle Database 11g release 2 (11.2.0.2), then Oracle recommends a power limit of 32. The power limit does have an impact on any applications that are running during the rebalance.
-
Ensure the
ALTER DISKGROUP
commands are run from different Oracle ASM instances. That way, the rebalance operation for multiple disk groups can run in parallel. -
Add disks to all disk groups including
SYSTEMDG
orDBFS_DG
. -
When adding servers with 3 TB High Capacity (HC) disks to existing servers with 2 TB disks, it is recommended to follow the procedure in My Oracle Support note 1476336.1 to properly define the grid disks and disk groups. At this point of setting up the rack, the new grid disks should be defined, but not yet placed into disk groups. Refer to the steps in My Oracle Support note 1476336.1.
-
If the existing storage servers have High Performance (HP) disks and you are adding storage servers with High Capacity (HC) disks, or the existing storage servers have HC disks and you are adding storage servers with HP disks, then you must place the new disks in new disk groups. It is not permitted to mix HP and HC disks within the same disk group.
-
-
Monitor the status of the rebalance operation using a query similar to the following from any Oracle ASM instance:
SQL> SELECT * FROM GV$ASM_OPERATION WHERE STATE = 'RUN';
The remaining tasks can be done while the rebalance is in progress.
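As a quick sanity check on the merged cellip.ora file, the individual storage server IP addresses can be extracted with a short awk one-liner (a hedged sketch; the sample file name and contents below are illustrative, matching the format shown in the example above, where X4-2L cells carry two semicolon-separated addresses per line):

```shell
# Create an illustrative cellip.ora-style sample file.
cat > sample-cellip.ora <<'EOF'
cell="192.168.10.9"
cell="192.168.10.17;192.168.10.18"
EOF

# Print every individual storage server IP, splitting dual-IP entries on ";".
awk -F'"' '/^cell=/ { n = split($2, ips, ";"); for (i = 1; i <= n; i++) print ips[i] }' sample-cellip.ora
```

Counting the printed addresses against the expected number of cells is a cheap way to confirm no line was lost in the merge.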
See Also:
-
Obtaining Current Configuration Information for information about the existing grid disks.
-
Setting Up New Servers for information about configuring the grid disks.
-
Oracle Automatic Storage Management Administrator's Guide for information about the ASM_POWER_LIMIT parameter.
Parent topic: Configuring the New Hardware
3.7 Adding Servers to a Cluster
This procedure describes how to add servers to a cluster.
For adding nodes to an Oracle VM cluster, refer to Expanding an Oracle VM RAC Cluster on Exadata in Oracle Exadata Database Machine Maintenance Guide.
Caution:
If Oracle Clusterware manages additional services that are not yet installed on the new nodes, such as Oracle GoldenGate, then note the following:
-
It may be necessary to stop those services on the existing node before running the
addNode.sh
script. -
It is necessary to create any users and groups on the new database servers that run these additional services.
-
It may be necessary to disable those services from auto-start so that Oracle Clusterware does not try to start the services on the new nodes.
Note:
To prevent problems with transferring files between existing and new nodes, you need to set up SSH equivalence. See Step 4 in Expanding an Oracle VM Oracle RAC Cluster on Exadata for details.
-
Ensure the
/etc/oracle/cell/network-config/*.ora files are correct and consistent on all database servers. The cellip.ora file on all database servers should include the older and newer database servers and storage servers.
-
file all database server should include the older and newer database servers and storage servers. -
Ensure the
ORACLE_BASE and diag destination directories have been created on the Oracle Grid Infrastructure destination home.
The following is an example for Oracle Grid Infrastructure 11g:
# dcli -g /root/new_group_files/dbs_group -l root mkdir -p \
/u01/app/11.2.0/grid /u01/app/oraInventory /u01/app/grid/diag
# dcli -g /root/new_group_files/dbs_group -l root chown -R grid:oinstall \
/u01/app/11.2.0 /u01/app/oraInventory /u01/app/grid
# dcli -g /root/new_group_files/dbs_group -l root chmod -R 770 \
/u01/app/oraInventory
# dcli -g /root/new_group_files/dbs_group -l root chmod -R 755 \
/u01/app/11.2.0 /u01/app/11.2.0/grid
The following is an example for Oracle Grid Infrastructure 12c:
# cd /
# rm -rf /u01/app/*
# mkdir -p /u01/app/12.1.0.2/grid
# mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome_1
# chown -R oracle:oinstall /u01
-
Ensure the inventory directory and Grid home directories have been created and have the proper permissions. The directories should be owned by the Grid user and the OINSTALL group. The inventory directory should have 770 permissions, and the Oracle Grid Infrastructure home directories should have 755.
If you are running Oracle Grid Infrastructure 12c or later:
-
Make sure oraInventory does not exist inside /u01/app.
-
Make sure /etc/oraInst.loc does not exist.
-
-
Create users and groups on the new nodes with the same user identifiers and group identifiers as on the existing nodes.
Note:
If Oracle Exadata Deployment Assistant (OEDA) was used earlier, then these users and groups should have been created. Check that they exist and have the correct UID and GID values.
Log in as the Grid user on an existing host.
-
Verify the Oracle Cluster Registry (OCR) backup exists.
ocrconfig -showbackup
-
Verify that the additional database servers are ready to be added to the cluster using commands similar to following:
$ cluvfy stage -post hwos -n \
dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
-verbose
$ cluvfy comp peer -refnode dm01db01 -n \
dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
-orainv oinstall -osdba dba | grep -B 3 -A 2 mismatched
$ cluvfy stage -pre nodeadd -n \
dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
-verbose -fixup -fixupdir /home/grid_owner_name/fixup.d
In the preceding commands, grid_owner_name is the name of the Oracle Grid Infrastructure software owner, dm02db01 through dm02db08 are the new database servers, and refnode is an existing database server.
Note:
-
The second and third commands do not display output if the commands complete correctly.
-
An error about a voting disk, similar to the following, may be displayed:
ERROR:
PRVF-5449 : Check of Voting Disk location "o/192.168.73.102/ \
DATA_CD_00_dm01cel07(o/192.168.73.102/DATA_CD_00_dm01cel07)" \
failed on the following nodes:
Check failed on nodes: dm01db01
dm01db01:No such file or directory
…
PRVF-5431 : Oracle Cluster Voting Disk configuration check
If such an error occurs:
- If you are running Oracle Grid Infrastructure 11g, set the environment variable as follows:
$ export IGNORE_PREADDNODE_CHECKS=Y
Setting the environment variable does not prevent the error when running the cluvfy command, but it does allow the addNode.sh script to complete successfully.
- If you are running Oracle Grid Infrastructure 12c or later, use the following addnode parameters: -ignoreSysPrereqs -ignorePrereq
In Oracle Grid Infrastructure 12c and later, addnode does not use the IGNORE_PREADDNODE_CHECKS environment variable.
If a database server was installed with a certain image and subsequently patched to a later image, then some operating system libraries may be older than the version expected by the cluvfy command. This causes the cluvfy command, and possibly the addNode.sh script, to fail.
It is permissible to have an earlier version as long as the difference in versions is minor. For example, glibc-common-2.5-81.el5_8.2 versus glibc-common-2.5-49. The versions are different, but both are at version 2.5, so the difference is minor, and it is permissible for them to differ.
Set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running the addNode.sh script, or use the addnode parameters -ignoreSysPrereqs -ignorePrereq with the addNode.sh script, to work around this problem.
-
-
Ensure that all directories inside the Oracle Grid Infrastructure home on the existing server have their executable bits set. Run the following commands as the
root user.
find /u01/app/11.2.0/grid -type d -user root ! -perm /u+x ! \
-perm /g+x ! -perm /o+x
find /u01/app/11.2.0/grid -type d -user grid_owner_name ! -perm /u+x ! \
-perm /g+x ! -perm /o+x
In the preceding commands, grid_owner_name is the name of the Oracle Grid Infrastructure software owner, and /u01/app/11.2.0/grid is the Oracle Grid Infrastructure home directory.
If any directories are listed, then ensure the group and others permissions are +x. The Grid_home/network/admin/samples, Grid_home/crf/admin/run/crfmond, and Grid_home/crf/admin/run/crflogd directories may need the +x permissions set.
If you are running Oracle Grid Infrastructure 12c or later, run commands similar to the following:
# chmod -R u+x /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
# chmod -R o+rx /u01/app/12.1.0.2/grid/gpnp/gpnp_bcp*
# chmod o+r /u01/app/12.1.0.2/grid/bin/oradaemonagent /u01/app/12.1.0.2/grid/srvm/admin/logging.properties
# chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*O
# chmod a+r /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/*0
# chown -f gi_owner_name:dba /u01/app/12.1.0.2/grid/OPatch/ocm/bin/emocmrsp
The Grid_home/network/admin/samples directory needs the +x permission:
chmod -R a+x /u01/app/12.1.0.2/grid/network/admin/samples
-
Run the following command. It is assumed that the Oracle Grid Infrastructure home is owned by the Grid user.
$ dcli -g old_db_nodes -l root chown -f grid_owner_name:dba \
/u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
-
This step is needed only if you are running Oracle Grid Infrastructure 11g. In Oracle Grid Infrastructure 12c, no response file is needed because the values are specified on the command line.
Create a response file, add-cluster-nodes.rsp, as the Grid user to add the new servers, similar to the following:
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm0201-vip,dm0202-vip,dm0203-vip,dm0204-vip,dm0205-vip,dm0206-vip,dm0207-vip,dm0208-vip}
In the preceding file, the host names dm02db01 through dm02db08 are the new nodes being added to the cluster.
Note:
The lines listing the server names should appear on one continuous line. They are wrapped in the documentation due to page limitations.
The lines listing the server names should appear on one continuous line. They are wrapped in the documentation due to page limitations. -
Ensure most of the files in the
Grid_home/rdbms/audit and Grid_home/log/diag/* directories have been moved or deleted before extending a cluster.
-
directories have been moved or deleted before extending a cluster. -
Refer to My Oracle Support note 744213.1 if the installer runs out of memory. The note describes how to edit the
Grid_home/oui/ora-param.ini file, and change the JRE_MEMORY_OPTIONS parameter to "-Xms512m -Xmx2048m".
-
. -
Add the new servers by running the
addNode.sh script from an existing server as the Grid user.
-
If you are running Oracle Grid Infrastructure 11g:
$ cd Grid_home/oui/bin
$ ./addNode.sh -silent -responseFile /path/to/add-cluster-nodes.rsp
-
If you are running Oracle Grid Infrastructure 12c or later, run the addnode.sh command with the CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES parameters. The syntax is:
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={comma_delimited_new_nodes}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={comma_delimited_new_node_vips}"
For example:
$ cd Grid_home/addnode/
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm02db01-vip,dm02db02-vip,dm02db03-vip,dm02db04-vip,dm02db05-vip,dm02db06-vip,dm02db07-vip,dm02db08-vip}" -ignoreSysPrereqs -ignorePrereq
-
-
Verify the grid disks are visible from each of the new database servers.
$ Grid_home/grid/bin/kfod disks=all dscvgroup=true
-
Run the
orainstRoot.sh script as the root user when prompted, using the dcli utility.
$ dcli -g new_db_nodes -l root \
/u01/app/oraInventory/orainstRoot.sh
-
Disable HAIP on the new servers.
Before running the root.sh script, on each new server, set the HAIP_UNSUPPORTED environment variable to TRUE.
$ export HAIP_UNSUPPORTED=true
-
Run the
Grid_home/root.sh script on each server sequentially. This simplifies the process, and ensures that any issues can be clearly identified and addressed.
Note:
The node identifier is set in order of the nodes where the root.sh script is run. Typically, the script is run from the lowest numbered node name to the highest.
Check the log file from the
root.sh script and verify there are no problems on the server before proceeding to the next server. If there are problems, then resolve them before continuing.
Check the status of the cluster after adding the servers.
$ cluvfy stage -post nodeadd -n \
dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08 \
-verbose
-
Check that all servers have been added and have basic services running.
crsctl stat res -t
Note:
It may be necessary to mount disk groups on the new servers. The following commands must be run as the oracle user.
$ srvctl start diskgroup -g data
$ srvctl start diskgroup -g reco
-
If you are running Oracle Grid Infrastructure releases 11.2.0.2 and later, then perform the following steps:
-
Manually add the CLUSTER_INTERCONNECTS parameter to the SPFILE for each Oracle ASM instance.
ALTER SYSTEM SET cluster_interconnects = '192.168.10.x' \
sid='+ASMx' scope=spfile
-
Restart the cluster on each new server.
-
Verify the parameters were set correctly.
-
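As a concrete sketch of the CLUSTER_INTERCONNECTS step above, if the new servers were to host ASM instances +ASM5 and +ASM6, the SPFILE updates might look like the following. This is a hedged example: the instance numbers and private IP addresses are illustrative only and must match the actual RDMA Network Fabric addresses of each new node.

```shell
# Illustrative only: set each new ASM instance's private interconnect address.
sqlplus -s / as sysasm <<'EOF'
ALTER SYSTEM SET cluster_interconnects = '192.168.10.5' sid='+ASM5' scope=spfile;
ALTER SYSTEM SET cluster_interconnects = '192.168.10.6' sid='+ASM6' scope=spfile;
EOF
```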
Parent topic: Configuring the New Hardware
3.8 Configuring Cell Alerts for New Oracle Exadata Storage Servers
Cell alerts need to be configured for the new Oracle Exadata Storage Servers.
The configuration depends on the type of installation.
-
When extending an Oracle Exadata Database Machine rack:
Manually configure cell alerts on the new storage servers. Use the settings on the original storage servers as a guide. To view the settings on the original storage servers, use a command similar to the following:
dcli -g old_cell_nodes -l celladmin cellcli -e list cell detail
To configure alert notification on the new storage servers, use a command similar to the following:
dcli -g new_cell_nodes -l root "cellcli -e ALTER CELL \
mailServer=\'mail_relay.example.com\', \
smtpPort=25, \
smtpUseSSL=false, \
smtpFrom=\'DBM dm01\', \
smtpFromAddr=\'storecell@example.com\', \
smtpToAddr=\'dbm-admins@example.com\', \
notificationMethod=\'mail,snmp\', \
notificationPolicy=\'critical,warning,clear\', \
snmpSubscriber=\(\(host=\'snmpserver.example.com\',port=162\)\)"
Note:
The backslash character (\) is used as an escape character for the dcli utility, and as a line continuation character in the preceding command.
When cabling racks:
Use Oracle Exadata Deployment Assistant (OEDA) as the root user from the original rack to set up e-mail alerts for the storage servers on the new rack. The utility includes the SetupCellEmailAlerts step to configure alerts.
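After the alerts are configured, delivery can be checked by asking each new cell to send a test message using the CellCLI ALTER CELL VALIDATE MAIL command (a hedged sketch; the new_cell_nodes group file name follows the conventions used earlier in this section):

```shell
# Send a test e-mail from every new storage server to confirm the SMTP settings work.
dcli -g new_cell_nodes -l root "cellcli -e alter cell validate mail"
```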
Parent topic: Configuring the New Hardware
3.9 Adding Oracle Database Software to the New Servers
It is necessary to add the Oracle Database software directory ORACLE_HOME
to the database servers after the cluster modifications are complete, and all the database servers are in the cluster.
-
Check the Oracle_home/bin directory for files ending in zero (0), such as nmb0, that are owned by the root user and do not have oinstall or world read privileges. Use the following command to modify the file privileges:
# chmod a+r $ORACLE_HOME/bin/*0
If you are running Oracle Database release 12c or later, you also have to change permissions for files ending in uppercase O, in addition to files ending in zero.
# chmod a+r $ORACLE_HOME/bin/*O
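The effect of the two chmod commands can be rehearsed on a scratch directory standing in for $ORACLE_HOME/bin; the real files exist only on an installed system, and only nmb0 below comes from this section (fakeO is a made-up stand-in for a file ending in uppercase O):

```shell
# Hypothetical demo on a scratch directory; do not run against a real home.
demo=$(mktemp -d)
touch "$demo/nmb0" "$demo/fakeO"
chmod 700 "$demo/nmb0" "$demo/fakeO"   # simulate root-only permissions
chmod a+r "$demo"/*0 "$demo"/*O        # the two fixes from this step
perms0=$(stat -c '%a' "$demo/nmb0")
permsO=$(stat -c '%a' "$demo/fakeO")
echo "$perms0 $permsO"                 # both files are now world-readable
rm -rf "$demo"
```

The `*0` glob covers files ending in zero and `*O` covers the uppercase-O files required for Oracle Database 12c and later.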
-
This step is required for Oracle Database 11g only. If you are running Oracle Database 12c, you can skip this step because the directory has already been created.
Create the ORACLE_BASE directory for the database owner, if it is different from the Oracle Grid Infrastructure software owner (Grid user), using the following commands:
# dcli -g /root/new_group_files/dbs_group -l root mkdir -p /u01/app/oracle
# dcli -g /root/new_group_files/dbs_group -l root chown oracle:oinstall /u01/app/oracle
-
Run the following command to set ownership of the emocmrsp file in the Oracle Database $ORACLE_HOME directory:
# dcli -g old_db_nodes -l root chown -f oracle:dba /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
-
This step is required for Oracle Database 11g only. If you are running Oracle Database 12c, then you can skip this step because the values are entered on the command line.
Create a response file, add-db-nodes.rsp, as the oracle owner to add the new servers, similar to the following:
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}
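One way to guarantee the CLUSTER_NEW_NODES entry stays on a single continuous line is to write the response file with a quoted heredoc. This is a sketch using the example server names from this step; the path is a temporary stand-in:

```shell
# Sketch: write add-db-nodes.rsp with a heredoc so nothing wraps the
# CLUSTER_NEW_NODES entry across lines.
rsp=$(mktemp)
cat > "$rsp" <<'EOF'
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}
EOF
lines=$(wc -l < "$rsp")
echo "$lines"          # exactly 2 lines: the node list is not wrapped
```

Counting the lines is a quick sanity check that an editor or copy-paste did not break the node list.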
Note:
The lines listing the server names should appear on one continuous line. They are wrapped in this document due to page limitations. -
Add the Oracle Database
ORACLE_HOME
directory to the new servers by running theaddNode.sh
script from an existing server as the database owner user.-
If you are running Oracle Grid Infrastructure 11g:
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent -responseFile /path/to/add-db-nodes.rsp
-
If you are running Oracle Grid Infrastructure 12c, then you specify the nodes on the command line. The syntax is:
./addnode.sh -silent "CLUSTER_NEW_NODES={comma_delimited_new_nodes}"
For example:
$ cd $Grid_home/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}" -ignoreSysPrereqs -ignorePrereq
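If the new server names are already kept in a group file (one hostname per line, as the dcli group files are), the comma-delimited CLUSTER_NEW_NODES value can be derived from it rather than typed by hand. The file name and hosts below are examples:

```shell
# Sketch: derive the comma-delimited node list for addnode.sh from a
# one-host-per-line group file (contents here are example hostnames).
group=$(mktemp)
printf '%s\n' dm02db01 dm02db02 dm02db03 > "$group"
nodes=$(paste -sd, "$group")
echo "./addnode.sh -silent \"CLUSTER_NEW_NODES={$nodes}\""
```

This avoids the wrapped-line problem entirely, since the list is built programmatically.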
-
-
Ensure the
$ORACLE_HOME/oui/oraparam.ini
file has the memory settings that match the parameters set in the Oracle Grid Infrastructure home. -
Run the root.sh script on each server when prompted, as the root user, using the dcli utility:
$ dcli -g new_db_nodes -l root $ORACLE_HOME/root.sh
In the preceding command, new_db_nodes is the file with the list of new database servers.
-
Verify the ORACLE_HOME directories have been added to the new servers.
# dcli -g /root/all_group -l root du -sm /u01/app/oracle/product/11.2.0/dbhome_1
Parent topic: Configuring the New Hardware
3.10 Adding Database Instance to the New Servers
Before adding the database instances to the new servers, check the following:
-
Maximum file size: If any data files have reached their maximum file size, then the addInstance command may fail with an ORA-00740 error. Oracle recommends you check that none of the files listed in DBA_DATA_FILES have reached their maximum size. Correct any such files before proceeding. -
Online redo logs: If the online redo logs are kept in the directory specified by the DB_RECOVERY_FILE_DEST parameter, then ensure the space allocated is sufficient for the additional redo logs for the new instances being added. If necessary, increase the DB_RECOVERY_FILE_DEST_SIZE parameter. -
Total number of instances in the cluster: Set the value of the initialization parameter
cluster_database_instances
in the SPFILE for each database to the total number of instances that will be in the cluster after adding the new servers. -
The HugePages settings are correctly configured on the new servers to match the existing servers.
-
Use a command similar to the following from an existing database server to add instances to the new servers. In the command, the instance dbm9 is being added for server dm02db01.
dbca -silent -addInstance -gdbName dbm -nodeList dm02db01 -instanceName dbm9 \
  -sysDBAUsername sys
The command must be run for all servers and instances, substituting the server name and instance name, as appropriate.
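Since the command must be repeated per server and instance, the substitutions can be made mechanical by looping over server:instance pairs. This sketch only echoes each dbca command for review; the pairs are examples, and in a real run you would execute the command instead of echoing it:

```shell
# Sketch: generate one dbca -addInstance command per server:instance pair.
gen_addinstance() {
  for pair in "$@"; do
    node=${pair%%:*}    # server name, e.g. dm02db01
    inst=${pair#*:}     # instance name, e.g. dbm9
    printf 'dbca -silent -addInstance -gdbName dbm -nodeList %s -instanceName %s -sysDBAUsername sys\n' "$node" "$inst"
  done
}
gen_addinstance dm02db01:dbm9 dm02db02:dbm10 dm02db03:dbm11
```

Reviewing the generated commands before running them makes it easy to confirm every server gets the intended instance name.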
Note:
If the command fails, then ensure any files that were created, such as redo log files, are cleaned up. The deleteInstance command does not clean up log files or data files that were created by the addInstance command. -
Add the
CLUSTER_INTERCONNECTS
parameter to each new instance.-
Manually add the
CLUSTER_INTERCONNECTS
parameter to the SPFILE for each new database instance. The additions are similar to the existing entries, but are the RDMA Network Fabric addresses corresponding to the server that each instance runs on. -
Restart the instance on each new server.
-
Verify the parameters were set correctly.
-
Parent topic: Configuring the New Hardware
3.11 Returning the Rack to Service
Use the following procedure to ensure the new hardware is correctly configured and ready for use:
-
Verify the RDMA Network Fabric cables are connected and secure.
- For RoCE, run the verify_roce_cables.py script, available from My Oracle Support.
- For InfiniBand, run the /opt/oracle.SupportTools/ibdiagtools/verify-topology command.
-
Run the Oracle Exadata Database Machine HealthCheck utility using the steps described in My Oracle Support note 1070954.1.
-
Verify the instance additions using the following commands:
srvctl config database -d dbm
srvctl status database -d dbm
-
Check the cluster resources using the following command:
crsctl stat res -t
-
Ensure the original configuration summary report from the original cluster deployment is updated to include all servers. This document should include the calibrate and network verifications for the new rack, and the RDMA Network Fabric cable checks.
-
Conduct a power-off test, if possible. If the new Exadata Storage Servers cannot be powered off, then verify that the new database servers with the new instances can be powered off and powered on, and that all processes start automatically.
Note:
Ensure the Oracle ASM disk rebalance process has completed for all disk groups. Use SQL*Plus to connect to an Oracle ASM instance and issue the following command:
SELECT * FROM gv$asm_operation;
No rows should be returned by the command.
-
Review the configuration settings, such as the following:
- All parallelism settings
- Backup configurations
- Standby site, if any
- Service configuration
- Oracle Database File System (DBFS) configuration, and mount points on new servers (not required for X7 and later servers)
- Installation of Oracle Enterprise HugePage Manager agents on new database servers
- HugePages settings
-
Incorporate the new cell and database servers into Oracle Auto Service Request (ASR).
-
Update Oracle Enterprise Manager Cloud Control to include the new nodes.
Parent topic: Configuring the New Hardware