1 Summary List: Installing Oracle Clusterware

The following is a summary list of installation configuration requirements and commands. This summary is intended to provide an overview of the installation process. It is written for Asianux Server, Oracle Linux, or Red Hat Enterprise Linux; some tasks are slightly different on SUSE Linux.

In addition to providing a summary of the Oracle Clusterware installation process, this list also contains configuration information for preparing a system for Automatic Storage Management (ASM) and Oracle Real Application Clusters (Oracle RAC) installation.

1.1 Verify System Requirements

For more information, review the following section in Chapter 2:

"Checking the Hardware Requirements"

Enter the following commands to check available memory:

grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo

The minimum required RAM is 1 GB, and the minimum required swap space is 1 GB. Oracle recommends that you set swap space to twice the amount of RAM for systems with 2 GB of RAM or less. For systems with 2 GB to 8 GB RAM, use swap space equal to RAM. For systems with over 8 GB RAM, use swap space equal to 0.75 times the size of RAM.
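
For example, a quick way to compare current memory and swap against these guidelines is to read both values from /proc/meminfo. This is a minimal sketch that simply reports the two sizes in MB:

awk '/MemTotal/ {mem=$2} /SwapTotal/ {swap=$2} END {printf "RAM: %d MB, swap: %d MB\n", mem/1024, swap/1024}' /proc/meminfo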

df -h

This command checks the available space on file systems. If you use standard redundancy for Oracle Clusterware files, which is 2 Oracle Cluster Registry (OCR) partitions and 3 voting disk partitions, then you should have at least 1 GB of disk space available on separate physical disks reserved for Oracle Clusterware files. Each partition for the Oracle Clusterware files should be 256 MB in size.

The Oracle Clusterware home requires 650 MB of disk space.

df -h /tmp

Ensure that you have at least 400 MB of disk space in /tmp. If this space is not available, then increase the partition size, or delete unnecessary files in /tmp.

1.2 Check Network Requirements

For more information, review the following section in Chapter 2:

"Checking the Network Requirements"

The following is a list of address requirements that you must configure on a domain name server (DNS), or configure in the /etc/hosts file for each cluster node:

  • You must have three network addresses for each node:

    • A public IP address

    • A virtual IP address, which is used by applications for failover in the event of node failure

    • A private IP address, which is used by Oracle Clusterware and Oracle RAC for internode communication

  • The virtual IP address has the following requirements:

    • The IP address and host name are currently unused (it can be registered in a DNS, but should not be accessible by a ping command)

    • The virtual IP address is on the same subnet as your public interface

  • The private IP address has the following requirements:

    • It should be on a subnet reserved for private networks, such as 10.0.0.0 or 192.168.0.0

    • It should use dedicated switches or a physically separate, private network, reachable only by the cluster member nodes, preferably using high-speed NICs

    • It must use the same private interfaces for both Oracle Clusterware and Oracle RAC private IP addresses

    • It cannot be registered on the same subnet that is registered to a public IP address
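
For illustration, a two-node /etc/hosts file that meets these requirements might look like the following. All host names and addresses shown here are hypothetical examples:

# Public addresses
192.0.2.101    node1.example.com    node1
192.0.2.102    node2.example.com    node2
# Virtual IP addresses (same subnet as the public interface; unused before installation)
192.0.2.111    node1-vip.example.com    node1-vip
192.0.2.112    node2-vip.example.com    node2-vip
# Private interconnect addresses (reserved private subnet)
10.0.0.1       node1-priv
10.0.0.2       node2-priv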

After you obtain the IP addresses from a network administrator, you can use the utility system-config-network to assign the public and private IP addresses to NICs, or you can configure them manually using ifconfig. Do not assign the VIP address to a NIC; Oracle Clusterware configures the VIP address during installation.

Ping all IP addresses. The public and private IP addresses should respond to ping commands. The VIP addresses should not respond.
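
For example, using the hypothetical host names from the /etc/hosts example above (the two VIP pings should fail at this stage):

for host in node1 node2 node1-priv node2-priv; do ping -c 1 $host; done
ping -c 1 node1-vip
ping -c 1 node2-vip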

1.3 Check Operating System Packages

Refer to the tables listed in Chapter 2, "Identifying Software Requirements," for details, or use a system configuration script such as the Oracle Validated RPM.
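
For example, you can spot-check several commonly required packages with a single rpm query. This is not the complete list; the authoritative package names and versions are in the Chapter 2 tables:

rpm -q binutils gcc glibc libaio libstdc++ make sysstat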

1.4 Set Kernel Parameters

For more information, review the following section in Chapter 2:

"Configuring Kernel Parameters"

Using any text editor, create or edit the /etc/sysctl.conf file, and add or edit lines similar to the following:

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

To enter these kernel settings into the running kernel, enter the following command:

# sysctl -p
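
To confirm that the running kernel has picked up the new values, query individual parameters. For example:

# /sbin/sysctl kernel.shmmax kernel.sem fs.file-max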

1.5 Configure Groups and Users

For more information, review the following sections in Chapter 2:

"Overview of Groups and Users for Oracle Clusterware Installations"

For information about creating Oracle Database homes, review the following sections in Chapter 3:

"Creating Standard Configuration Operating System Groups and Users"

"Creating Custom Configuration Groups and Users for Job Roles"

"Creating Groups and Users for Oracle Clusterware"

For purposes of evaluation, we will assume that you have a single Oracle installation owner, and that this Oracle software owner is named oracle. You must create an Oracle Inventory group (oinstall) for Oracle Clusterware. If you intend to install Oracle Database, then you must also create an OSDBA group (dba). Use the id oracle command to confirm the correct group and user configuration.

/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/useradd -m -g oinstall -G dba oracle
id oracle
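
The id oracle output should show oinstall as the primary group and dba as a secondary group. For example (the UID and GID values shown are illustrative):

uid=501(oracle) gid=502(oinstall) groups=502(oinstall),503(dba)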

Set the password on the oracle account:

passwd oracle

1.6 Create Directories

For more information, review the following section in Chapter 2:

"Requirements for Creating an Oracle Clusterware Home Directory"

For information about creating Oracle Database homes, review the following sections in Chapter 3:

"Understanding the Oracle Base Directory Path"

"Creating the Oracle Base Directory Path"

For installations with Oracle Clusterware only, Oracle recommends that you let Oracle Universal Installer (OUI) create the Oracle Clusterware and Oracle Central Inventory (oraInventory) directories for you. However, as root, you must create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that OUI can select that directory during installation. For OUI to recognize the path as an Oracle software path, it must be in the form u0[1-9]/app.

For example:

mkdir -p /u01/app
chown -R oracle:oinstall /u01/app

1.7 Configure Oracle Installation Owner Shell Limits

For information, review the following section in Chapter 2:

"Configuring Software Owner User Environments"

As root, add the following lines to /etc/profile to set shell limits for the user oracle and for root. (The /etc/profile file applies to the Bourne, BASH, and Korn shells.)

Add this ulimit change for each Oracle software owner that you create:

# Raise the process and open-file limits for the Oracle software owner
if [ "$USER" = "oracle" ]; then
  if [ "$SHELL" = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
# Apply the same limits to root
if [ "$USER" = "root" ]; then
  if [ "$SHELL" = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
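
Note that ulimit can raise a session's limits only up to the hard limits set for the account. The hard limits are typically configured in /etc/security/limits.conf; the following values reflect the requirements described in Chapter 2, and should be confirmed against your release (repeat the four lines for each Oracle software owner):

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536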

1.8 Configure SSH

For information, review the following section in Chapter 2:

"Configuring SSH on All Cluster Nodes"

To configure SSH, complete the following tasks:

1.8.1 Check Existing SSH Configuration on the System

To determine if SSH is running, enter the following command:

$ pgrep sshd

If SSH is running, then the response to this command is one or more process ID numbers. In the home directory of the software owner that you want to use for the installation (crs, oracle), use the command ls -al to ensure that the .ssh directory is owned and writable only by the user.

1.8.2 Configure SSH on Cluster Member Nodes

Complete the following tasks on each node. You must configure SSH separately for each Oracle software installation owner that you intend to use for installation.

  • Create .ssh, and create either RSA or DSA keys on each node (example commands follow this list)

  • Add all keys to a common authorized_keys file
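
For example, as the software owner on each node, the key-creation step looks like the following sketch, using RSA keys and the default file locations:

$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

After keys exist on every node, append each node's public key to one authorized_keys file, copy that file into the .ssh directory of the software owner on every cluster member node (for example, with scp), and set its permissions to 600.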

1.8.3 Enable SSH User Equivalency on Cluster Member Nodes

After you have copied the authorized_keys file that contains all keys to each node in the cluster, start SSH on the node, and load SSH keys into memory. Note that you must either use this terminal session for installation, or reload SSH keys into memory for the terminal session from which you run the installation.
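
For example, to start the agent and load the keys into memory for the current terminal session:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add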

1.9 Create Storage

The following outlines the procedure for creating OCR and voting disk partitions on block devices, and creating ASM disks on block devices.

For information, review the following sections in Chapter 4:

"Configuring Storage for Oracle Clusterware Files on a Supported Shared File System"

"Configuring Disk Devices for Oracle Clusterware Files"

1.9.1 Create Disk Partitions for ASM Disks, OCR Disks, and Voting Disks

Create partitions as needed. For OCR and voting disks, create 280 MB partitions for new installations, or use existing partition sizes for upgrades.
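
For example, one common way to create a 280 MB partition is with fdisk, where /dev/sdb is a hypothetical shared device:

# /sbin/fdisk /dev/sdb

At the fdisk prompts, enter n (new partition), p (primary), a partition number, the default first cylinder, and +280M as the size; then enter w to write the partition table.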

Note:

Every server running one or more database instances that use ASM for storage has an ASM instance. In an Oracle RAC environment, there is one ASM instance for each node, and the ASM instances communicate with each other on a peer-to-peer basis.

Only one ASM instance is permitted for each node regardless of the number of database instances on the node.

If you are upgrading an existing installation, then shut down ASM instances before starting installation, unless otherwise instructed in the upgrade procedure for your platform.

The following outlines the procedure for setting permissions for an ASM, OCR, or voting disk partition:

  1. On each node, create a permissions file in /etc/udev/permissions.d to change the permissions from the default root ownership. On Red Hat Enterprise Linux 4, Oracle Linux 4, and Asianux Server, this file should be called 49-oracle.permissions, so that it is read before 50-udev.permissions. On Asianux Server 3, Oracle Linux 5, Red Hat Enterprise Linux 5, and SUSE Enterprise Server 10 systems, this file should be called 51-oracle.permissions, so that it is read after 50-udev.permissions.

    For each OCR partition, the contents of the xx-oracle.permissions file are as follows:

    devicepartition:root:oinstall:0640
    

    For each voting disk partition, the contents of the xx-oracle.permissions file are as follows:

    devicepartition:crs_user:oinstall:0640
    

    For each ASM disk partition, the contents of the xx-oracle.permissions file are as follows:

    devicepartition:oracle_user:OSDBA:0660
    

    For example:

    # OCR disks
    sda1:root:oinstall:0640
    # Voting disks
    sda2:crs:oinstall:0640
    # ASM disks
    sdd1:oracle:dba:0660
    
  2. Run the command /sbin/partprobe from each node.

  3. Enter a command similar to the following to mark a disk as an ASM disk:

    # /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
    

1.10 Verify Oracle Clusterware Requirements with CVU

For information, review the following section in Chapter 6:

"Verifying Oracle Clusterware Requirements with CVU"

Using the following command syntax, log in as the installation owner user (oracle or crs), and start Cluster Verification Utility (CVU) to check system requirements for installing Oracle Clusterware. In the following syntax example, replace the variable mountpoint with the installation media mountpoint, and replace the variable node_list with the names of the nodes in your cluster, separated by commas:

/mountpoint/runcluvfy.sh stage -pre crsinst -n node_list
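
For example, with the installation media mounted at the hypothetical mountpoint /mnt/dvd and a two-node cluster:

$ /mnt/dvd/runcluvfy.sh stage -pre crsinst -n node1,node2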

1.11 Install Oracle Clusterware Software

For information, review the following sections in Chapter 6:

"Preparing to Install Oracle Clusterware with OUI"

"Installing Oracle Clusterware with OUI"

  1. Ensure that SSH keys are loaded into memory for the terminal session from which you run Oracle Universal Installer (OUI).

  2. Navigate to the installation media, and start OUI. For example:

    $ cd /Disk1
    $ ./runInstaller
    
  3. Select Install Oracle Clusterware, and enter the configuration information as prompted.

1.12 Prepare the System for Oracle RAC and ASM

For information, review the following section in Chapter 5:

"Configuring Disks for Automatic Storage Management"

If you intend to install Oracle RAC, as well as Oracle Clusterware, then Oracle recommends that you use ASM for database file management, and install the Linux ASMLIB RPMs to simplify administration. ASMLib 2.0 is delivered as a set of three Linux packages:

  • oracleasmlib-2.0 - the ASM libraries

  • oracleasm-support-2.0 - utilities needed to administer ASMLib

  • oracleasm - a kernel module for the ASM library

Each Linux distribution has its own set of ASMLib 2.0 packages, and within each distribution, each kernel version has a corresponding oracleasm package.

Complete the following procedures on each node that you intend to make a member of the cluster:

1.12.1 Determine the Correct Oracleasm Package

Determine which kernel you are using by logging in as root and running the following command:

uname -rm

For example:

# uname -rm
2.6.9-5.ELsmp i686

The example shows that this is a 2.6.9-5 kernel for an SMP (multiprocessor) server using Intel i686 CPUs.

1.12.2 Download and Install the Oracleasm Package

After you determine the kernel version for your system, complete the following task:

  1. Open a Web browser using the following URL:

    http://www.oracle.com/technology/tech/linux/asmlib/index.html
    
  2. Select the link for your version of Linux.

  3. Download the oracleasmlib and oracleasm-support packages for your version of Linux.

  4. Download the oracleasm package corresponding to your kernel version.

  5. Log in as root and install the ASM packages (an example rpm command follows this list).
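
For example, assuming the 2.6.9-5.ELsmp kernel identified earlier. The file names here are placeholders; the exact names depend on your kernel version, architecture, and the package releases that you download:

# rpm -Uvh oracleasm-support-2.0.3-1.i386.rpm \
      oracleasmlib-2.0.2-1.i386.rpm \
      oracleasm-2.6.9-5.ELsmp-2.0.3-1.i686.rpm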

1.12.3 Configure ASMLib

Log in as root, and enter the following command:

# /etc/init.d/oracleasm configure

Provide information as prompted for your system.
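
The script prompts for the owner and group of the ASM driver interface and whether to start the driver on boot. The exact prompt wording varies by ASMLib version; the answers shown assume the oracle user and dba group created earlier:

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y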

1.12.4 Mark ASM Disk Partitions

For OUI to recognize a disk partition as an ASM disk candidate, you must mark it: log in as root and mark each disk partition that you created for ASM using the following command syntax, where ASM_DISK_NAME is the name that you choose for the ASM disk, and device_name is the name of the disk device (partition) that you want to mark:

/etc/init.d/oracleasm createdisk ASM_DISK_NAME device_name
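
For example, to mark the hypothetical partition /dev/sdd1 as an ASM disk named DATA1, and then list the marked disks to confirm:

# /etc/init.d/oracleasm createdisk DATA1 /dev/sdd1
# /etc/init.d/oracleasm listdisks
DATA1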