1 Summary List: Installing Oracle Clusterware

The following is a summary list of installation configuration requirements and commands. This summary is intended to provide an overview of the installation process.

In addition to providing a summary of the Oracle Clusterware installation process, this list also contains configuration information for preparing a system for Automatic Storage Management (ASM) and Oracle Real Application Clusters (Oracle RAC) installation.

1.1 Verify System Requirements

For more information, review the following section in Chapter 2:

"Checking the Hardware Requirements"

Enter the following commands to check available memory:

grep "Physical:" /var/adm/syslog/syslog.log
/usr/sbin/swapinfo -a

On Itanium processor systems, you can use the following command:

# /usr/contrib/bin/machinfo  | grep -i Memory

The minimum required RAM is 1 GB, and the minimum required swap space is 1 GB. Oracle recommends that you set swap space to twice the amount of RAM for systems with 2 GB of RAM or less. For systems with 2 GB to 8 GB of RAM, use swap space equal to RAM. For systems with more than 8 GB of RAM, use swap space equal to 0.75 times the size of RAM.
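The swap sizing rules above can be sketched as a small shell function (a sketch only; the function name and the use of MB units are illustrative):

```shell
# Recommended swap (in MB) for a given amount of RAM (in MB),
# following the rules above:
#   RAM <= 2 GB        -> 2 x RAM
#   2 GB < RAM <= 8 GB -> RAM
#   RAM > 8 GB         -> 0.75 x RAM
recommended_swap_mb() {
    ram_mb=$1
    if [ "$ram_mb" -le 2048 ]; then
        echo $(( ram_mb * 2 ))
    elif [ "$ram_mb" -le 8192 ]; then
        echo "$ram_mb"
    else
        echo $(( ram_mb * 3 / 4 ))
    fi
}
```

For example, `recommended_swap_mb 4096` prints 4096 (swap equal to RAM).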

bdf 

This command checks the available space on file systems. If you use standard redundancy for Oracle Clusterware files, that is, two Oracle Cluster Registry (OCR) partitions and three voting disk partitions, then you should have at least 1 GB of disk space available on separate physical disks reserved for Oracle Clusterware files. Each partition for the Oracle Clusterware files should be 256 MB in size.

The Oracle Clusterware home requires 650 MB of disk space.

bdf /tmp

Ensure that you have at least 400 MB of disk space in /tmp. If this space is not available, then increase the partition size, or delete unnecessary files in /tmp.
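The free-space check can be scripted with the POSIX df command (on HP-UX, bdf reports the same figure); this is a sketch, with the function name illustrative:

```shell
# Check that a mount point has at least the given amount of free space
# (in MB). Uses `df -Pk`, whose fourth column is available space in KB.
check_free_mb() {
    dir=$1
    need_mb=$2
    avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -ge $(( need_mb * 1024 )) ]; then
        echo "OK: $dir has $(( avail_kb / 1024 )) MB available"
    else
        echo "FAIL: $dir needs at least ${need_mb} MB available"
    fi
}
```

For example, `check_free_mb /tmp 400` reports whether /tmp meets the 400 MB requirement.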

1.2 Check Network Requirements

For more information, review the following section in Chapter 2:

"Checking the Network Requirements"

The following is a list of address requirements that you must configure on a domain name server (DNS), or configure in the /etc/hosts file for each cluster node:

  • You must have three network addresses for each node:

    • A public IP address

    • A virtual IP address, which is used by applications for failover in the event of node failure

    • A private IP address, which is used by Oracle Clusterware and Oracle RAC for internode communication

  • The virtual IP address has the following requirements:

    • The IP address and host name are currently unused (it can be registered in a DNS, but should not be accessible by a ping command)

    • The virtual IP address is on the same subnet as your public interface

  • The private IP address has the following requirements:

    • It should be on a subnet reserved for private networks, such as 10.0.0.0 or 192.168.0.0

    • It should use dedicated switches or a physically separate, private network, reachable only by the cluster member nodes, preferably using high-speed NICs

    • It must use the same private interfaces for both Oracle Clusterware and Oracle RAC private IP addresses

    • It cannot be registered on the same subnet that is registered to a public IP address

After you obtain the IP addresses from a network administrator, assign the public and private IP addresses to NICs using your platform's network configuration tool, or configure them manually using ifconfig. Do not assign the VIP address to an interface; Oracle Clusterware manages the VIP address.

Ping all IP addresses. The public and private IP addresses should respond to ping commands. The VIP addresses should not respond.
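As an illustration, /etc/hosts entries for a hypothetical two-node cluster might look like the following (all names and addresses are examples only):

```
# Public addresses
192.0.2.101    rac1        rac1.example.com
192.0.2.102    rac2        rac2.example.com
# Virtual addresses (same subnet as public; not assigned to any interface)
192.0.2.111    rac1-vip    rac1-vip.example.com
192.0.2.112    rac2-vip    rac2-vip.example.com
# Private interconnect addresses
192.168.1.101  rac1-priv
192.168.1.102  rac2-priv
```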

1.3 Check Operating System Packages

Refer to the tables listed in Chapter 2 "Identifying Software Requirements" for details.

1.4 Set Kernel Parameters

For more information, review the following section in Chapter 2:

"Configuring Kernel Parameters"

Start System Administration Manager (SAM) using the following command:

# /usr/sbin/sam

Ensure that the kernel parameter values are equal to or greater than the values listed in Table 2-3.

1.5 Configure Groups and Users

For more information, review the following sections in Chapter 2:

"Overview of Groups and Users for Oracle Clusterware Installations"

For information about creating Oracle Database homes, review the following sections in Chapter 3:

"Creating Standard Configuration Operating System Groups and Users"

"Creating Custom Configuration Groups and Users for Job Roles"

For purposes of evaluation, assume that you have a single Oracle installation owner, and that the user name of this software owner is oracle. You must create an Oracle installation owner group (oinstall) for Oracle Clusterware. If you intend to install Oracle Database, then you must also create an OSDBA group (dba). Use the id oracle command to confirm the correct group and user configuration.

/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/useradd -m -g oinstall -G dba oracle
id oracle

Set the password on the oracle account:

passwd oracle
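The output of id oracle can be checked for the required groups with a small helper (a sketch; the sample uid/gid values below are illustrative):

```shell
# Return success if an `id`-style output string lists the given group.
has_group() {
    id_output=$1
    group=$2
    case "$id_output" in
        *"($group)"*) return 0 ;;
        *)            return 1 ;;
    esac
}

# Illustrative output from `id oracle`:
sample='uid=501(oracle) gid=502(oinstall) groups=502(oinstall),503(dba)'
has_group "$sample" oinstall && has_group "$sample" dba && echo "groups OK"
```

This prints "groups OK" only when both oinstall and dba appear in the output.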

1.6 Create Directories

For more information, review the following section in Chapter 2:

"Requirements for Creating an Oracle Clusterware Home Directory"

For information about creating Oracle Database homes, review the following sections in Chapter 3:

"Understanding the Oracle Base Directory Path"

"Creating the Oracle Base Directory Path"

For installations with Oracle Clusterware only, Oracle recommends that you let Oracle Universal Installer (OUI) create the Oracle Clusterware and Oracle Central Inventory (oraInventory) directories for you. However, as root, you must create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that OUI can select that directory during installation. For OUI to recognize the path as an Oracle software path, it must be in the form /u0[1-9]/app.

For example:

mkdir -p /u01/app
chown -R oracle:oinstall /u01/app
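The OFA naming rule above can be checked mechanically; a sketch:

```shell
# Return success if a candidate path matches the /u0[1-9]/app form
# that OUI recognizes as an Oracle software path.
is_ofa_path() {
    case "$1" in
        /u0[1-9]/app) return 0 ;;
        *)            return 1 ;;
    esac
}

is_ofa_path /u01/app && echo "OFA-compliant"
```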

1.7 Configure Oracle Installation Owner Shell Limits

For information, review the following section in Chapter 2:

"Configuring Software Owner User Environments"

1.8 Configure SSH

For information, review the following section in Chapter 2:

"Configuring SSH or RCP on All Cluster Nodes"

To configure SSH, complete the following tasks:

1.8.1 Check Existing SSH Configuration on the System

To determine if SSH is running, enter the following command:

$ ps -ef | grep sshd

If SSH is running, then the response to this command is one or more process ID numbers. In the home directory of the software owner that you want to use for the installation (grid, oracle), use the command ls -al to ensure that the .ssh directory is owned and writable only by the user.

1.8.2 Configure SSH on Cluster Member Nodes

Complete the following tasks on each node. You must configure SSH separately for each Oracle software installation owner that you intend to use for installation.

  • Create .ssh, and create either RSA or DSA keys on each node

  • Add all keys to a common authorized_keys file

1.8.3 Enable SSH User Equivalency on Cluster Member Nodes

After you have copied the authorized_keys file that contains all keys to each node in the cluster, start SSH on the node, and load SSH keys into memory. Note that you must either use this terminal session for installation, or reload SSH keys into memory for the terminal session from which you run the installation.
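The key-gathering step can be sketched as follows. In practice, each public key comes from running ssh-keygen on its node and copying the resulting .pub file to one directory first; the directory layout and file names here are assumptions:

```shell
# Concatenate every per-node public key in a directory into a single
# authorized_keys file, readable and writable only by the owner.
build_authorized_keys() {
    keydir=$1
    out=$2
    cat "$keydir"/*.pub > "$out"
    chmod 600 "$out"
}
```

Copy the resulting authorized_keys file into the .ssh directory on every node, then load the keys into memory with ssh-agent and ssh-add before starting the installation.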

1.9 Create Storage

The following outlines the procedure for creating OCR and voting disk partitions on disk devices, and creating ASM disks.

For information, review the following sections in Chapter 4:

"Configuring Storage for Oracle Clusterware Files on a Supported Shared File System"

"Configuring Storage for Oracle Clusterware Files on Raw Devices"

1.9.1 Create Disk Partitions for ASM Files, OCR Disks, and Voting Disks

Create partitions as needed. For OCR and voting disks, create 280 MB partitions for new installations, or use existing partition sizes for upgrades.

Note:

Every server running one or more database instances that use ASM for storage has an ASM instance. In an Oracle RAC environment, there is one ASM instance for each node, and the ASM instances communicate with each other on a peer-to-peer basis.

Only one ASM instance is permitted for each node regardless of the number of database instances on the node.

If you are upgrading an existing installation, then shut down ASM instances before starting installation, unless otherwise instructed in the upgrade procedure for your platform.

The following outlines the procedure for creating ASM, OCR, or voting disk partitions without HP Serviceguard:

To configure shared raw disk devices for Oracle Clusterware files, database files, or both:

  1. If you intend to use raw disk devices for database file storage, then choose a name for the database that you want to create.

    The name that you choose must start with a letter and have no more than four characters, for example, orcl.

  2. Identify or configure the required disk devices.

    The disk devices must be shared on all of the cluster nodes.

  3. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/ioscan -fun -C disk
    

    The output from this command is similar to the following:

    Class  I  H/W Path    Driver S/W State   H/W Type     Description
    ==========================================================================
    disk    0  0/0/1/0.6.0 sdisk  CLAIMED     DEVICE       HP   DVD-ROM 6x/32x
                           /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
    disk    1  0/0/1/1.2.0 sdisk  CLAIMED     DEVICE      SEAGATE ST39103LC
                           /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
    

    This command displays information about each disk attached to the system, including the character raw device name (/dev/rdsk/cxtydz).

  4. If the ioscan command does not display device name information for a device that you want to use, then enter the following command to install the special device files for any new devices:

    # /usr/sbin/insf -e
    
  5. For each disk that you want to use, enter the following command on any node to verify that it is not already part of an LVM volume group:

    # /sbin/pvdisplay /dev/dsk/cxtydz
    

    If this command displays volume group information, then the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.

    Note:

    If you are using different volume management software, for example VERITAS Volume Manager, then refer to the appropriate documentation for information about verifying that a disk is not in use.
  6. If the ioscan command shows different device names for the same device on any node, then:

    1. Change directory to the /dev/rdsk directory.

    2. Enter the following command to list the raw disk device names and their associated major and minor numbers:

      # ls -la
      

      The output from this command is similar to the following for each disk device:

      crw-r--r--   1 bin        sys        188 0x032000 Nov  4  2003 c3t2d0
      

      In this example, 188 is the device major number and 0x032000 is the device minor number.

    3. Enter the following command to create a new device file for the disk that you want to use, specifying the same major and minor number as the existing device file:

      Note:

      Oracle recommends that you use the alternative device file names shown in the previous table.
      # mknod ora_ocr_raw_256m c 188 0x032000
      
    4. Repeat these steps on each node, specifying the correct major and minor numbers for the new device files on each node.

  7. Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk device that you want to use:

    Note:

    If you are using a multi-pathing disk driver with Automatic Storage Management, then ensure that you set the permissions only on the correct logical device name for the disk.

    If you created an alternative device file for the device, then set the permissions on that device file.

    • OCR:

      # chown root:oinstall /dev/rdsk/cxtydz
      # chmod 640 /dev/rdsk/cxtydz
      
    • Oracle Clusterware voting disk or database files:

      # chown oracle:dba /dev/rdsk/cxtydz
      # chmod 660 /dev/rdsk/cxtydz
      
  8. If you are using raw disk devices for database files, then follow these steps to create the Database Configuration Assistant raw device mapping file:

    Note:

    You must complete this procedure only if you are using raw devices for database files. The Database Configuration Assistant raw device mapping file enables Database Configuration Assistant to identify the appropriate raw disk device for each database file. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.
    1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

      • Bourne or Korn shell:

        $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
        
      • C shell:

        % setenv ORACLE_BASE /u01/app/oracle
        
    2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

      # mkdir -p $ORACLE_BASE/oradata/dbname
      # chown -R oracle:oinstall $ORACLE_BASE/oradata
      # chmod -R 775 $ORACLE_BASE/oradata
      

      In this example, dbname is the name of the database that you chose previously.

    3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

    4. Using any text editor, create a text file similar to the following that identifies the disk device file name associated with each database file.

      Oracle recommends that you use a file name similar to dbname_raw.conf for this file.

      Note:

      The following example shows a sample mapping file for a two-instance RAC cluster. Some of the devices use alternative disk device file names. Ensure that the device file name that you specify identifies the same disk device on all nodes.
      system=/dev/rdsk/c2t1d1
      sysaux=/dev/rdsk/c2t1d2
      example=/dev/rdsk/c2t1d3
      users=/dev/rdsk/c2t1d4
      temp=/dev/rdsk/c2t1d5
      undotbs1=/dev/rdsk/c2t1d6
      undotbs2=/dev/rdsk/c2t1d7
      redo1_1=/dev/rdsk/c2t1d8
      redo1_2=/dev/rdsk/c2t1d9
      redo2_1=/dev/rdsk/c2t1d10
      redo2_2=/dev/rdsk/c2t1d11
      control1=/dev/rdsk/c2t1d12
      control2=/dev/rdsk/c2t1d13
      spfile=/dev/rdsk/dbname_spfile_raw_5m
      pwdfile=/dev/rdsk/dbname_pwdfile_raw_5m
      

      In this example, dbname is the name of the database.

      Use the following guidelines when creating or editing this file:

      • Each line in the file must have the following format:

        database_object_identifier=device_file_name
        

        The alternative device file names suggested in the previous table include the database object identifier that you must use in this mapping file. For example, in the following alternative disk device file name, redo1_1 is the database object identifier:

        rac_redo1_1_raw_120m
        
      • For a RAC database, the file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

      • Specify at least two control files (control1, control2).

      • To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.

    5. Save the file and note the file name that you specified.

    6. When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.

  9. When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:

    /dev/rdsk/cxtydz
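The ownership and permission settings in step 7 can be generated per device with a small dry-run helper that only prints the commands, so they can be reviewed before being run as root (a sketch; the device paths in the usage example are hypothetical):

```shell
# Print the chown/chmod commands for a raw device, based on its role:
#   ocr            -> root:oinstall, mode 640
#   voting or dbf  -> oracle:dba,    mode 660
perms_for() {
    role=$1
    dev=$2
    case "$role" in
        ocr)
            echo "chown root:oinstall $dev"
            echo "chmod 640 $dev"
            ;;
        voting|dbf)
            echo "chown oracle:dba $dev"
            echo "chmod 660 $dev"
            ;;
        *)
            echo "unknown role: $role" >&2
            return 1
            ;;
    esac
}
```

For example, `perms_for ocr /dev/rdsk/c2t1d1` prints the two commands for an OCR device.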
    

1.10 Verify Oracle Clusterware Requirements with CVU

For information, review the following section in Chapter 6:

"Verifying Oracle Clusterware Requirements with CVU"

Using the following command syntax, log in as the installation owner user (oracle or grid), and start Cluster Verification Utility (CVU) to check system requirements for installing Oracle Clusterware. In the following syntax example, replace the variable mountpoint with the installation media mountpoint, and replace the variable node_list with the names of the nodes in your cluster, separated by commas:

/mountpoint/runcluvfy.sh stage -pre crsinst -n node_list
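For example, the node_list argument can be assembled from a space-separated list of node names (the mountpoint and node names below are hypothetical):

```shell
# Join example node names with commas and print the resulting
# CVU invocation for review.
nodes="rac1 rac2"
node_list=$(printf '%s' "$nodes" | tr ' ' ',')
echo "/mnt/clusterware/runcluvfy.sh stage -pre crsinst -n $node_list"
```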

1.11 Install Oracle Clusterware Software

For information, review the following sections in Chapter 6:

"Preparing to Install Oracle Clusterware with OUI"

"Installing Oracle Clusterware with OUI"

  1. Ensure SSH keys are loaded into memory for the terminal session from which you run the Oracle Universal Installer (OUI).

  2. Navigate to the installation media, and start OUI. For example:

    $ cd /Disk1
    $ ./runInstaller
    
  3. Select Install Oracle Clusterware, and enter the configuration information as prompted.

1.12 Prepare the System for Oracle RAC and ASM

For information, review the following section in Chapter 5:

"Configuring Disks for Automatic Storage Management"

If you intend to install Oracle RAC, as well as Oracle Clusterware, then Oracle recommends that you use ASM for database file management.