|Oracle® Clusterware Installation Guide
11g Release 1 (11.1) for Solaris Operating System
This chapter describes the procedures for installing Oracle Clusterware for Solaris Operating System. If you are installing Oracle Database with Oracle Real Application Clusters (Oracle RAC), then this is phase one of a two-phase installation.
Using the following command syntax, log in as the installation owner user (crs), and start Cluster Verification Utility (CVU) to check system requirements for installing Oracle Clusterware:
/mountpoint/runcluvfy.sh stage -pre crsinst -n node_list
In the preceding syntax example, replace the variable mountpoint with the installation media mountpoint, and replace the variable node_list with the names of the nodes in your cluster, separated by commas.
For example, for a cluster with mountpoint /mnt/dvdrom/, and with nodes node1, node2, and node3, enter the following command:
$ /mnt/dvdrom/runcluvfy.sh stage -pre crsinst -n node1,node2,node3
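As a minimal sketch of building the node_list argument, the following assembles the comma-separated list that runcluvfy.sh expects. The mountpoint and node names are example values, not requirements of the tool.

```shell
#!/bin/sh
# Sketch: assemble the comma-separated node list that runcluvfy.sh expects.
# MOUNTPOINT and NODES are example values for illustration only.
MOUNTPOINT=/mnt/dvdrom
NODES="node1 node2 node3"

# cluvfy wants commas, not spaces, between node names.
NODE_LIST=$(echo "$NODES" | tr ' ' ',')
echo "$MOUNTPOINT/runcluvfy.sh stage -pre crsinst -n $NODE_LIST"
```

Keeping the node names in one variable makes it easy to reuse the same list for later cluvfy stages.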
The Oracle Clusterware preinstallation stage check verifies the following:
Node Reachability: All of the specified nodes are reachable from the local node.
User Equivalence: Required user equivalence exists on all of the specified nodes.
Node Connectivity: Connectivity exists between all the specified nodes through the public and private network interconnections, and at least one subnet exists that connects each node and contains public network interfaces that are suitable for use as virtual IPs (VIPs).
Administrative Privileges: The oracle user has proper administrative privileges to install Oracle Clusterware on the specified nodes.
Shared Storage Accessibility: If specified, the OCR device and voting disk are shared across all the specified nodes.
System Requirements: All system requirements are met for installing Oracle Clusterware software, including kernel version, kernel parameters, memory, swap directory space, temporary directory space, and required users and groups.
Kernel Packages: All required operating system software packages are installed.
Node Applications: The virtual IP (VIP), Oracle Notification Service (ONS) and Global Service Daemon (GSD) node applications are functioning on each node.
If the Cluster Verification Utility report indicates that your system fails to meet the requirements for Oracle Clusterware installation, then use the topics in this section to correct the problem or problems indicated in the report, and run the Cluster Verification Utility command again.
Verify the oracle user configuration to ensure that the user configuration and the SSH configuration are properly completed.
See Also: "Creating Identical Users and Groups on Other Cluster Nodes" in Chapter 3, and "Configuring SSH on All Cluster Nodes" in Chapter 2 for user equivalency configuration instructions
$ ssh node_name date
The output from this command should be the timestamp of the remote node identified by the value that you use for node_name. If ssh is in the default location, the /usr/bin directory, then use ssh to configure user equivalence. You can also use rsh to confirm user equivalence.
If you have not attempted to use SSH to connect to the host node before running Cluster Verification Utility, then Cluster Verification Utility indicates a user equivalence error. If you see a message similar to the following when entering the date command with SSH, then this is the probable cause of the user equivalence error:
The authenticity of host 'node1 (188.8.131.52)' can't be established. RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9. Are you sure you want to continue connecting (yes/no)?
Enter yes, and then run Cluster Verification Utility again to determine if the user equivalency error is resolved.
If ssh is in a location other than the default, /usr/bin, then Cluster Verification Utility reports a user equivalence check failure. To avoid this error, navigate to the directory $CV_HOME/cv/admin, open the file cvu_config with a text editor, and add or update the key ORACLE_SRVM_REMOTESHELL to indicate the ssh path location on your system. For example:
# Locations for ssh and scp commands
ORACLE_SRVM_REMOTESHELL=/usr/local/bin/ssh
ORACLE_SRVM_REMOTECOPY=/usr/local/bin/scp
Note the following rules for modifying the cvu_config file:
Key entries have the syntax name=value
Each key entry and the value assigned to the key defines one property only
Lines beginning with the number sign (#) are comment lines, and are ignored
Lines that do not follow the syntax name=value are ignored
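The parsing rules above can be illustrated with a small awk sketch that extracts a key's value the way the rules describe: comment lines are skipped, and only name=value lines define a property. The file contents below are a sample, not your real cvu_config.

```shell
#!/bin/sh
# Sketch: read one key from a cvu_config-style file under the rules above:
# lines beginning with '#' are comments, and only name=value lines count.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# Locations for ssh and scp commands
ORACLE_SRVM_REMOTESHELL=/usr/local/bin/ssh
ORACLE_SRVM_REMOTECOPY=/usr/local/bin/scp
a line without name=value syntax is ignored
EOF

# awk: skip comments, require exactly one name=value pair, match the key
val=$(awk -F= '/^#/ {next} NF==2 && $1=="ORACLE_SRVM_REMOTESHELL" {print $2}' "$cfg")
echo "$val"
rm -f "$cfg"
```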
When you have changed the path configuration, run Cluster Verification Utility again. If ssh is in a location other than the default, you also need to start OUI with additional arguments to specify a different location for the remote shell and remote copy commands. Enter runInstaller -help to obtain information about how to use these arguments.
Note: When you or OUI run rsh commands, including any login or other shell scripts they start, you may see errors about invalid arguments or standard input if the scripts generate any output. You should correct the cause of these errors. To stop the errors, remove from the oracle user's login scripts all commands that generate output when you run these commands.
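A quick way to check for this problem is to run a trivial command through a shell and confirm that a marker string is the only output. This is a sketch of our own devising (the CVU_MARKER string and the local `sh -c` stand-in are not part of any Oracle tool); substitute `ssh node_name` for RUN to test a real remote node.

```shell
#!/bin/sh
# Sketch: detect extraneous output from login scripts.
# RUN is a local stand-in so the sketch runs anywhere; replace it with
# "ssh node_name" to check an actual cluster node.
RUN="sh -c"

# A quiet login environment should print exactly the marker and nothing else.
out=$($RUN 'echo CVU_MARKER' 2>&1)
if [ "$out" = "CVU_MARKER" ]; then
    echo "login scripts are quiet"
else
    echo "unexpected output detected:"
    echo "$out"
fi
```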
If you see messages about X11 forwarding, then complete the task "Setting Display and X11 Forwarding Configuration" to resolve this issue.
If you see errors similar to the following:
stty: standard input: Invalid argument
stty: standard input: Invalid argument
then hidden files on the system contain stty commands. If you see these errors, then refer to Chapter 2, "Preventing Oracle Clusterware Installation Errors Caused by stty Commands" to correct the cause of these errors.
Use the ping command with each node address to check that every node can be reached. When you find an address that cannot be reached, check your list of public and private addresses to make sure that you have them correctly configured. If you use vendor clusterware, then refer to the vendor documentation for assistance. Ensure that the public and private network interfaces have the same interface names on each node of your cluster.
Use the id command on each node to confirm that the oracle user is created with the correct group membership. Ensure that you have created the required groups, and create or modify the user account on affected nodes to establish required group membership.
See Also: "Creating Standard Configuration Operating System Groups and Users" in Chapter 3 for instructions about how to create required groups, and how to configure the oracle user
Before you install Oracle Clusterware with Oracle Universal Installer (OUI), use the following checklist to ensure that you have all the information you will need during installation, and to ensure that you have completed all tasks that must be done before starting to install Oracle Clusterware. Mark the box for each task as you complete it, and write down the information needed, so that you can provide it during installation.
Shut Down Running Oracle Processes
If you are installing Oracle Clusterware on a node that already has a single-instance Oracle Database 11g release 1 (11.1) installation, then stop the existing ASM instances. After Oracle Clusterware is installed, start up the ASM instances again. When you restart the single-instance Oracle database, the ASM instances use the Cluster Synchronization Services daemon (CSSD) from Oracle Clusterware instead of the CSSD for the single-instance Oracle database.
You can upgrade some or all nodes of an existing Cluster Ready Services installation. For example, if you have a six-node cluster, then you can upgrade two nodes each in three upgrading sessions. Base the number of nodes that you upgrade in each session on the load the remaining nodes can handle. This is called a "rolling upgrade."
If a Global Services Daemon (GSD) from Oracle9i Release 9.2 or earlier is running, then stop it before installing Oracle Database 11g release 1 (11.1) Oracle Clusterware by running the following command:
$ Oracle_home/bin/gsdctl stop
In the preceding example, Oracle_home is the Oracle Database home that is running the GSD.
Caution: If you have an existing Oracle9i release 2 (9.2) Oracle Cluster Manager (Oracle CM) installation, then do not shut down the Oracle CM service. Shutting down the Oracle CM service prevents the Oracle Clusterware 11g release 1 (11.1) software from detecting the Oracle9i release 2 nodelist, and causes failure of the Oracle Clusterware installation.
Note: If you receive a warning to stop all Oracle services after starting OUI, then run the command Oracle_home/bin/localconfig delete, where Oracle_home is the home that is running CSS.
During an Oracle Clusterware installation, if OUI detects an existing Oracle Database 10g release 1 (10.1) Cluster Ready Services (CRS), then you are given the option to perform a rolling upgrade by installing Oracle Database 11g release 1 (11.1) Oracle Clusterware on a subset of cluster member nodes.
If you intend to perform a rolling upgrade, then you should shut down the CRS stack on the nodes you intend to upgrade, and unlock the Oracle Clusterware home using the script
/clusterware/upgrade/preupdate.sh, which is available on the 11g release 1 (11.1) installation media.
If you intend to perform a standard upgrade, then shut down the CRS stack on all nodes, and unlock the Oracle Clusterware home using the script /clusterware/upgrade/preupdate.sh on the installation media.
When you run OUI and select the option to install Oracle Clusterware on a subset of nodes, OUI installs Oracle Database 11g release 1 (11.1) Oracle Clusterware software into the existing Oracle Clusterware home on the local and remote node subset. When you run the root script, it starts the Oracle Clusterware 11g release 1 (11.1) stack on the subset cluster nodes, but lists it as an inactive version.
When all member nodes of the cluster are running Oracle Clusterware 11g release 1 (11.1), then the new clusterware becomes the active version.
If you intend to install Oracle RAC, then you must first complete the upgrade to Oracle Clusterware 11g release 1 (11.1) on all cluster member nodes before you install the Oracle Database 11g release 1 (11.1) version of Oracle RAC.
Determine the Oracle Inventory location
If you have already installed Oracle software on your system, then OUI detects the existing Oracle Inventory directory from the
/etc/oraInst.loc file, and uses this location.
If you are installing Oracle software for the first time on your system, and your system does not have an Oracle inventory, then you are asked to provide a path for the Oracle inventory, and you are also asked the name of the Oracle Inventory group (typically, oinstall).
See Also: The preinstallation chapter, Chapter 2, for information about creating the Oracle Inventory and completing required system configuration
Obtain root account access
During installation, you are asked to run configuration scripts as the root user. You must run these scripts as root, or be prepared to have your system administrator run them for you. Note that these scripts must be run in sequence. If you attempt to run scripts simultaneously, then the installation will fail.
During installation, you are asked if you want translation of user interface text into languages other than the default, which is English.
Note: If the language set for the operating system is not supported by Oracle Universal Installer, then Oracle Universal Installer, by default, runs in the English language.
See Also: Oracle Database Globalization Support Guide for detailed information on character sets and language configuration
Determine your cluster name, public node names, private node names, and virtual node names for each node in the cluster
If you install the clusterware during installation, and are not using third-party vendor clusterware, then you are asked to provide a public node name and a private node name for each node. If you use third-party clusterware, then use your vendor documentation to complete setup of your public and private domain addresses.
When you enter the public node name, use the primary host name of each node. In other words, use the name displayed by the
hostname command. This node name can be either the permanent or the virtual host name.
In addition, ensure that the following are true:
It must be globally unique throughout your host domain.
It must be at least one character long and less than 15 characters long.
It must consist of the same character set used for host names: underscores (_), hyphens (-), and single-byte alphanumeric characters (a to z, A to Z, and 0 to 9). If you use vendor clusterware, then Oracle recommends that you use the vendor cluster name.
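The length and character-set rules above can be checked with a small shell function. This is a convenience sketch (not an Oracle-supplied check); global uniqueness must still be verified by hand.

```shell
#!/bin/sh
# Sketch: check a candidate cluster name against the rules listed above:
# at least 1 and fewer than 15 characters, consisting only of single-byte
# alphanumerics, underscores, and hyphens. Uniqueness is not checked here.
valid_cluster_name() {
    name=$1
    len=${#name}
    [ "$len" -ge 1 ] && [ "$len" -lt 15 ] || return 1
    # reject any character outside the allowed set
    case $name in
        *[!A-Za-z0-9_-]*) return 1 ;;
    esac
    return 0
}

valid_cluster_name crs_cluster1 && echo "ok"
valid_cluster_name "bad name!" || echo "rejected"
```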
Determine a private node name or private IP address for each node. The private IP address is an address that is accessible only by the other nodes in this cluster. Oracle Database uses private IP addresses for internode, or instance-to-instance, Cache Fusion traffic. Oracle recommends that you provide a name in the format public_hostname-priv. For example:
node1-priv
Determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle Database uses VIPs for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format public_hostname-vip. For example:
node1-vip
Note: The following is a list of additional information about node IP addresses:
For the local node only, OUI automatically fills in public, private, and VIP fields. If your system uses vendor clusterware, then OUI may fill additional fields.
Host names, private names, and virtual host names are not domain-qualified. If you provide a domain in the address field during installation, then OUI removes the domain from the address.
Private IP addresses should not be accessible as public interfaces. Using public interfaces for Cache Fusion can cause performance problems.
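The domain-stripping behavior noted above (OUI removes any domain you supply in an address field) is the same as dropping everything from the first dot, as this small sketch shows. The host name used is a made-up example.

```shell
#!/bin/sh
# Sketch: OUI strips the domain from a qualified name; shell parameter
# expansion produces the same result. The FQDN below is an example.
fqdn=node1.example.com
short=${fqdn%%.*}     # drop everything from the first dot onward
echo "$short"         # -> node1
```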
Identify shared storage for Oracle Clusterware files and prepare disk partitions if necessary
During installation, you are asked to provide paths for two files that must be shared across all nodes of the cluster, either on a shared raw device, or a shared file system file:
The voting disk must be owned by the user performing the installation (crs), and must have permissions set to 640.
The Oracle Cluster Registry (OCR) contains cluster and database configuration information for the Oracle RAC database and for Oracle Clusterware, including the node list, and other information about cluster configuration and profiles.
The OCR disk must be owned by the user performing the installation (oracle). That installation user must have oinstall as its primary group. The OCR disk partitions must have permissions set to 640, though permissions files used with system restarts should have ownership set to root:oinstall. During installation, OUI changes ownership of the OCR disk partitions to root. Provide at least 280 MB disk space for the OCR partitions.
If your disks do not have external storage redundancy, then Oracle recommends that you provide one additional location for the OCR disk, and two additional locations for the voting disk, for a total of five partitions (two for OCR, and three for voting disks). Creating redundant storage locations protects the OCR and voting disk in the event of a disk failure on the partitions you choose for the OCR and the voting disk.
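The 640 permission requirement described above can be illustrated on ordinary files. This is only a sketch: real installations point at shared raw devices or shared file system paths, the temp files below are stand-ins, and changing ownership to the install user or root:oinstall requires root privileges that this sketch does not assume.

```shell
#!/bin/sh
# Sketch: apply the 640 permission requirement to stand-in files for the
# OCR and voting disk locations. Ownership changes (root:oinstall) are
# omitted because they require root.
ocr=$(mktemp)
voting=$(mktemp)

chmod 640 "$ocr" "$voting"

# Read back the mode string to confirm (first 10 columns of ls -l).
mode=$(ls -l "$ocr" | cut -c1-10)
echo "$mode"    # -rw-r-----
rm -f "$ocr" "$voting"
```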
See Also: Chapter 4
Complete the following steps to install Oracle Clusterware on your cluster. At any time during installation, if you have a question about what you are being asked to do, click the Help button on the OUI page.
Unless you have the same terminal window open that you used to set up SSH, enter the following commands:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Start the runInstaller command from the /Disk1 directory on the Oracle Database 11g release 1 (11.1) installation media.
Provide information or run scripts as root when prompted by OUI. If you need assistance during installation, click Help.
Note: You must run the root.sh scripts one at a time. Do not run the scripts simultaneously.
After you run root.sh on all the nodes, OUI runs the Oracle Notification Server Configuration Assistant, Oracle Private Interconnect Configuration Assistant, and Cluster Verification Utility. These programs run without user intervention.
When you have verified that your Oracle Clusterware installation is completed successfully, you can either use it to maintain high availability for other applications, or you can install an Oracle database.
If you intend to install Oracle Database 11g release 1 (11.1) with Oracle RAC, then refer to Oracle Real Application Clusters Installation Guide for Solaris Operating System. If you intend to use Oracle Clusterware by itself, then refer to the single-instance Oracle Database installation guide.
See Also: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for information about using cloning and node addition procedures, and Oracle Clusterware Administration and Deployment Guide for cloning Oracle Clusterware
During installation of Oracle Clusterware, on the Specify Cluster Configuration page, you are given the option either of providing cluster configuration information manually, or of using a cluster configuration file. A cluster configuration file is a text file that you can create before starting OUI, which provides OUI with information about the cluster name and node names that it requires to configure the cluster.
Oracle suggests that you consider using a cluster configuration file if you intend to perform repeated installations on a test cluster, or if you intend to perform an installation on many nodes.
To create a cluster configuration file:
On the installation media, navigate to the response file directory.
Using a text editor, open the response file crs.rsp, and find the section that describes the cluster configuration file.
Follow the directions in that section for creating a cluster configuration file.
The following is a list of some common Oracle Clusterware installation issues, and how to resolve them.
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Note that you must have the passphrase used to set up SSH. If you are not the person who set up SSH, then obtain the passphrase. Note also that the .ssh folder in the user home that is performing the installation must be set with 600 permissions.
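Permission checks like the one above can be scripted. The sketch below reads a directory's mode bits by parsing ls -ld output (stat flags differ between Solaris and Linux, so ls is the more portable choice here); a temp directory stands in for the installing user's .ssh folder so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch: verify a directory's permission bits match a required mode
# (the guide calls for 600 on the installing user's .ssh folder).
# A temp directory stands in for $HOME/.ssh.
dir=$(mktemp -d)
chmod 600 "$dir"

# First 10 columns of ls -ld give the type and permission string.
mode=$(ls -ld "$dir" | cut -c1-10)
echo "$mode"    # drw-------
rmdir "$dir"
```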
In addition, confirm group membership by entering the id command, and by entering id username. For example:
$ id
$ id oracle
Ensure that the OCR and voting disk partitions are owned by the installation owner user (crs) performing the installation. During installation, these permissions are changed to root ownership.
After installation, log in as root, and use the following command syntax to confirm that your Oracle Clusterware installation is installed and running correctly:
CRS_home/bin/crs_stat -t -v
[root@node1 /]: /u01/app/crs/bin/crs_stat -t -v
Name           Type        R/RA   F/FT   Target   State    Host
crs....ac3.gsd application 0/5    0/0    Online   Online   node1
crs....ac3.ons application 0/5    0/0    Online   Online   node1
crs....ac3.vip application 0/5    0/0    Online   Online   node1
crs....ac3.gsd application 0/5    0/0    Online   Online   node2
crs....ac3.ons application 0/5    0/0    Online   Online   node2
crs....ac3.vip application 0/5    0/0    Online   Online   node2
You can also use the command crsctl check crs for a less detailed system check. For example:
[root@node1 bin] $ ./crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
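A health check of this kind is easy to automate by counting the "appears healthy" lines in the command output. The sketch below uses a canned sample of the expected output so it runs anywhere; on a live cluster you would capture the real output of `crsctl check crs` instead.

```shell
#!/bin/sh
# Sketch: confirm all three daemons report healthy. The sample text
# mirrors the expected `crsctl check crs` output; substitute
#   out=$(CRS_home/bin/crsctl check crs)
# on a real cluster.
out='Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy'

healthy=$(echo "$out" | grep -c "appears healthy")
if [ "$healthy" -eq 3 ]; then
    echo "clusterware healthy"
else
    echo "one or more components failed the check"
fi
```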
Caution: After installation is complete, do not manually remove, or run cron jobs that remove, /var/tmp/.oracle or its files while Oracle Clusterware is up. If you remove these files, then Oracle Clusterware could encounter intermittent hangs, and you will encounter error CRS-0184: Cannot communicate with the CRS daemon.