Complete both procedures in this section to create and configure the initial Oracle database in a Sun Cluster configuration. If you are creating and configuring additional databases, perform only the procedure described in "How to Create an Oracle Database".
Make sure Sun Cluster is started and the node owns the disk groups.
To start the cluster, run the scadmin(1M) command as root. This command starts the cluster and assigns ownership of the disk group to the node you specify:
# scadmin startcluster physicalhost clustername
If the cluster already exists and the node is in the cluster, use the following command to ensure ownership of the disk group:
# haswitch physicalhost logicalhost
Configure the disk devices for use by your volume manager.
While Oracle supports raw I/O to both raw physical devices and raw metadevices (mirrored or nonmirrored), Sun Cluster supports raw Oracle I/O only on raw mirrored volumes or metadevices. You cannot use a device such as /dev/rdsk/c1t1d1s2 to contain Oracle data under Sun Cluster.
If you are using Solstice DiskSuite, set up UFS logs or raw mirrored metadevices on all nodes that will be running Sun Cluster HA for Oracle.
If you will be using raw mirrored metadevices to contain the databases, change the owner, group, and mode of each of the raw mirrored metadevices. If you are not using raw mirrored metadevices, skip this step. Instructions for creating mirrored metadevices are provided in Appendix B, Configuring Solstice DiskSuite.
If you are creating raw mirrored metadevices, type the following commands for each metadevice:
# chown oracle_id /dev/md/disk_group/rdsk/dn
# chgrp dba_id /dev/md/disk_group/rdsk/dn
# chmod 600 /dev/md/disk_group/rdsk/dn
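If you have several metadevices to prepare, you can apply the settings in a loop. The following sketch assumes a hypothetical disk group named oradg, metadevices d10 through d12, and the owner oracle and group dba:

# for md in d10 d11 d12
> do
>   chown oracle /dev/md/oradg/rdsk/$md
>   chgrp dba /dev/md/oradg/rdsk/$md
>   chmod 600 /dev/md/oradg/rdsk/$md
> done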
If you are using VERITAS Volume Manager, set up VxFS logs or raw devices on all nodes.
If you will be using raw devices to contain the databases, change the owner, group, and mode of each device. If you are not using raw devices, skip this step. Refer to your VERITAS Volume Manager documentation for information on setting up VxFS logs.
If you are creating raw devices, type the following command for each raw device:
# vxvol set owner=oracle_id group=dba_id mode=600 \
/dev/vx/rdsk/diskgroup_name/volume_name
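As with the Solstice DiskSuite step, you can apply the settings to several volumes in a loop. The disk group oradg and the volume names in this sketch are hypothetical:

# for vol in oravol01 oravol02 oravol03
> do
>   vxvol set owner=oracle group=dba mode=600 /dev/vx/rdsk/oradg/$vol
> done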
These are the high-level steps to create an Oracle database:
Preparing the database configuration files
Creating the database
Creating the v$sysstat view
These are the detailed steps to create an Oracle database.
Prepare database configuration files.
Place all parameter files, data files, redolog files, and control files on the logical host, that is, the disk group's multihost disks.
Within the init$ORACLE_SID.ora or config$ORACLE_SID.ora file, you might need to modify the assignments for control_files and background_dump_dest to specify the location of control files and alert files on the logical host.
If you are using Solaris authentication for database logins, set the remote_os_authent variable in the init$ORACLE_SID.ora file to TRUE. Provide full path names for background_dump_dest; do not use the special character ? to specify ORACLE_HOME.
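As an illustration, the relevant entries in init$ORACLE_SID.ora might look like the following. The instance name ora8 and logical host hadbms3 are borrowed from the example configuration later in this section, and the exact directory paths are hypothetical:

control_files = (/hadbms3/oracle/dbs/control01.ctl,
                 /hadbms3/oracle/dbs/control02.ctl,
                 /hadbms3/oracle/dbs/control03.ctl)
background_dump_dest = /hadbms3/oracle/admin/ora8/bdump
remote_os_authent = TRUE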
Create the database.
During creation, ensure that all configuration and database files are placed on the logical hosts.
Start the Oracle installer (orainst or runInstaller) and select the Create New Database Objects option.
Alternatively, depending on your Oracle version, you can create the database using the Oracle svrmgrl command. For Oracle8i or later installations, you can create databases later using the Oracle dbassist utility. See your Oracle8i documentation for details.
During the Oracle install session, place all the database files on the logical hosts.
Override the default file locations provided by the Oracle installer.
Verify that the file names of your control files match the file names in your configuration files.
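One way to check is to query the running instance; the v$sysstat-style dynamic view v$controlfile lists the control files the instance actually opened:

# svrmgrl

connect internal;
select name from v$controlfile;
exit;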
Verify that the v$sysstat view exists.
If v$sysstat does not exist already, create it by running the appropriate Oracle catalog scripts. The v$sysstat view is used by the Sun Cluster fault monitoring scripts.
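A quick existence check from svrmgrl:

# svrmgrl

connect internal;
select count(*) from v$sysstat;
exit;

If the query fails, run the catalog.sql script from $ORACLE_HOME/rdbms/admin, connected as internal, to create the standard v$ views.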
These are the high-level steps to set up Sun Cluster HA for Oracle:
Making entries for all of the database instances in the /var/opt/oracle/oratab files on all nodes running Sun Cluster HA for Oracle
Enabling user and password for fault monitoring and, optionally, granting permission for the database to use Solaris authentication
Configuring the Oracle listener to monitor Sun Cluster HA for Oracle
Verifying installation of the Oracle listener and Sun Cluster software
Activating the Oracle data service by using the hareg(1M) command
Bringing the Oracle database instance into service
These are the detailed steps to set up Sun Cluster HA for Oracle.
Make oratab entries for all Sun Cluster HA for Oracle database instances.
You must include the SID of the instance associated with your database in the /var/opt/oracle/oratab file on all nodes running Sun Cluster HA for Oracle, and you must keep this file current on all of those nodes for a failover to succeed. Update the file manually as you add or remove a SID. If the oratab files do not match, an error message is returned and the haoracle start command fails.
All entries in the /var/opt/oracle/oratab file should have the :N option specified to ensure that the instance will not start automatically on Solaris reboot. For example:
oracle_sid:/oracle:N
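For the two example instances used later in this section (ora8 and ora7, both with a hypothetical ORACLE_HOME of /oracle), the oratab file on every node would contain:

ora8:/oracle:N
ora7:/oracle:N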
Depending on which authentication method you choose (Oracle authentication or Solaris authentication), perform one of the following steps.
Enable access for the user and password to be used for fault monitoring.
You must complete this step if you do not enable Solaris authentication, as described in Step b.
In the following example, the user is scott and the password is tiger. Note that if you are using Oracle authentication, the user and password pair must agree with those used in Step 1.
For all supported Oracle releases, enable access by becoming the oracle user and running the following script in the session started by the svrmgrl(1M) command.
# svrmgrl

connect internal;
grant connect, resource to scott identified by tiger;
alter user scott default tablespace system quota 1m on system;
grant select on v_$sysstat to scott;
grant create session to scott;
grant create table to scott;
disconnect;
exit;
Grant permission for the database to use Solaris authentication.
The following sample entry enables Solaris authentication:
# svrmgrl

connect internal;
create user ops$root identified externally default tablespace system quota 1m on system;
grant connect, resource to ops$root;
grant select on v_$sysstat to ops$root;
grant create session to ops$root;
grant create table to ops$root;
disconnect;
exit;
Configure the listener.ora and tnsnames.ora files for Sun Cluster.
Create and update the tnsnames.ora and listener.ora files under /var/opt/oracle on all nodes.
Sun Cluster HA for Oracle imposes no restrictions on the listener name; it can be any valid Oracle listener name.
The following code sample identifies the lines in listener.ora that are updated.
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = hostname)       <- use logical host name
      (PORT = 1527)
    )
  )
.
.
SID_LIST_LISTENER =
  .
  .
  (SID_NAME = SID)            <- database name, default is ORCL
The following sample identifies the lines in tnsnames.ora that are updated.
service_name =
  .
  .
    (ADDRESS =                <- listener address
      (HOST = server_name)    <- logical host name
      (PORT = 1527)           <- must match the port in listener.ora
    )
  )
  (CONNECT_DATA =
    (SID = <SID>))            <- database name, default is ORCL
Sun Cluster HA for Oracle opens the /var/opt/oracle/tnsnames.ora file and scans it for the service name whose SID entry matches the instance name and whose HOST entry matches the logical host. The service_name obtained from tnsnames.ora is used by the Sun Cluster HA for Oracle remote fault monitor to connect to the server.
In the listener.ora file, replace the physical host name in the "HOST = hostname" line with the logical host name.
The following example shows how to update the listener.ora and tnsnames.ora files given the Oracle instances listed in Table 5-1.
Table 5-1 Example Oracle Configuration

Instance    Logical Host    Listener
--------    ------------    -------------
ora8        hadbms3         LISTENER-ora8
ora7        hadbms4         LISTENER-ora7
The following is a sample listener.ora file. See the Oracle documentation for your version of Oracle to configure these files.
LISTENER-ora7 =
  (ADDRESS_LIST =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = hadbms4)
      (PORT = 1530)
    )
  )
SID_LIST_LISTENER-ora7 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ora7)
    )
  )
LISTENER-ora8 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms3) (PORT = 1806))
  )
SID_LIST_LISTENER-ora8 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ora8)
    )
  )
The following is a sample tnsnames.ora file. See the Oracle documentation for your version of Oracle to configure these files.
ora8 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms3) (PORT = 1806))
    )
    (CONNECT_DATA = (SID = ora8))
  )
ora7 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms4) (PORT = 1530))
    )
    (CONNECT_DATA = (SID = ora7))
  )
Verify that Sun Cluster and the cluster daemon are installed and running on all hosts.
# hastat
If they are not running already, start them by using the scadmin startnode command.
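For example (see the scadmin(1M) man page for the exact arguments your release expects):

# scadmin startnode clustername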
From only one cluster node, register and activate Sun Cluster HA for Oracle by using the hareg(1M) command.
If the Oracle server is not yet registered, use hareg(1M) to register it from one cluster node. To register the data service on only one logical host, use the -h option and supply the logical host name:
# hareg -s -r oracle [-h logicalhost]
If the cluster is already running, use the hareg(1M) command, again from only one cluster node, to activate the Oracle data service:
# hareg -y oracle
Set up the Sun Cluster HA for Oracle configuration data.
Run the following command so that the instance will be monitored by Sun Cluster:
# haoracle insert $ORACLE_SID logicalhost 60 10 120 300 \
user/password /logicalhost/.../init$ORACLE_SID.ora listener
The previous command line includes the following:
haoracle insert - Command and subcommand
$ORACLE_SID - Name of the Oracle database instance
logicalhost - Logical host serving $ORACLE_SID (not the physical host)
60 10 120 300 - These parameters specify a probe cycle time of 60 seconds, a connectivity probe cycle count of 10, a probe timeout of 120 seconds, and a restart delay of 300 seconds.
user/password - These are the user and password to be used for fault monitoring. They must agree with the permission levels granted in Step 2. To use Solaris authentication, enter a slash (/) instead of the user name and password.
/logicalhost/.../init$ORACLE_SID.ora - This indicates the pfile to use to start up the database. This must be on a logical host's disk group.
listener - The Oracle listener. The listener is started and monitored using this name. The default is LISTENER. This field is optional.
See the haoracle(1M) man page for details on all options to haoracle(1M).
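For example, the ora8 instance from Table 5-1, monitored with the scott/tiger account created earlier, might be configured as follows; the pfile path shown is hypothetical:

# haoracle insert ora8 hadbms3 60 10 120 300 \
scott/tiger /hadbms3/oracle/dbs/initora8.ora LISTENER-ora8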
Bring the database instance into service.
Bring the Sun Cluster HA for Oracle database into service by executing the haoracle(1M) command. Monitoring for that instance will start automatically.
# haoracle start $ORACLE_SID
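To confirm that the instance is in service and being monitored, you can run hastat again and check the status reported for the Oracle data service:

# hastat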