Oracle Solaris Cluster Data Service for Siebel Guide

    Oracle Solaris Cluster 3.3 3/13 SPARC Platform Edition

Preparing the Nodes and Disks

This section contains the procedures you need to prepare the nodes and disks.

How to Prepare the Nodes

Use this procedure to prepare for the installation and configuration of Siebel.

  1. Become superuser on all of the nodes.
  2. Configure the /etc/nsswitch.conf file so that HA for Siebel starts and stops correctly if a switchover or a failover occurs.

    On each node that can master the logical host that runs HA for Siebel, include the following entries in the /etc/nsswitch.conf file.

    passwd:    files [NOTFOUND=return] nis [TRYAGAIN=0]
    publickey: files [NOTFOUND=return] nis [TRYAGAIN=0]
    project:   files [NOTFOUND=return] nis [TRYAGAIN=0]
    group:     files [NOTFOUND=return] nis [TRYAGAIN=0]

    HA for Siebel uses the su - user command to start, stop, and probe the service.

    The network information name service might become unavailable when a cluster node's public network fails. Adding the preceding entries ensures that the su(1M) command does not refer to the NIS/NIS+ name services if the network information name service is unavailable.
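
    For example, to confirm on a node that all four entries are in place, a quick check such as the following might help; its output should match the entries shown above:

    # egrep '^(passwd|publickey|project|group):' /etc/nsswitch.conf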

  3. Prevent the Siebel gateway probe from timing out while trying to open a file on /home.

    If a path that begins with /home on the node that runs the Siebel gateway depends on network resources such as NFS or NIS, and the public network fails, the Siebel gateway probe hangs while trying to open a file on /home. The probe then times out, which causes the Siebel gateway resource to go offline.

    To prevent this timeout, configure all nodes of the cluster that can host the Siebel gateway as follows (a sketch of the resulting automounter files appears after the sub-steps):

    1. Eliminate all NFS or NIS dependencies for any path starting with /home.

      You may either use a locally mounted /home path or rename the /home mount point to /export/home or another name that does not start with /home.

    2. Comment out the line containing +auto_master in the /etc/auto_master file, and change any /home entries to auto_home.
    3. Comment out the line containing +auto_home in the /etc/auto_home file.
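
    After these edits, the relevant lines of the two files might look similar to the following sketch; the map names are standard on Solaris, but the /net and /export/home lines are illustrative and your files can contain different entries:

    # /etc/auto_master
    #+auto_master
    /net          -hosts      -nosuid,nobrowse
    /export/home  auto_home   -nobrowse

    # /etc/auto_home
    #+auto_home
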
  4. Prepare the Siebel administrator's home directory.
  5. On each node, create an entry for the Siebel administrator group in the /etc/group file, and add potential users to the group.

    Tip - In the following example, the Siebel administrator group is named siebel.


    Ensure that group IDs are the same on all of the nodes that run HA for Siebel.

    siebel:*:521:siebel

    You can create group entries in a network name service. If you do so, also add your entries to the local /etc/group file to eliminate dependency on the network name service.
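
    Rather than editing /etc/group by hand, you might create the group with the groupadd command; the group name and group ID here match the example entry above:

    # groupadd -g 521 siebel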

  6. On each node, create an entry for the Siebel administrator.

    Tip - In the following example, the Siebel administrator is named siebel.


    The following command updates the /etc/passwd and /etc/shadow files with an entry for the Siebel administrator.

    # useradd -u 121 -g siebel -s /bin/ksh -d /Siebel-home siebel

    Ensure that the Siebel user entry is the same on all of the nodes that run HA for Siebel.
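
    To confirm that the entry is identical everywhere, you might compare the output of the following command across the nodes:

    # grep '^siebel:' /etc/passwd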

  7. Ensure that the Siebel administrator's default environment contains settings for accessing the Siebel database. For example, if the Siebel database is on Oracle, the following entries may be included in the .profile file.
    export ORACLE_HOME=/global/oracle/OraHome
    export PATH=$PATH:$ORACLE_HOME/bin
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export ORACLE_SID=siebeldb
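
    With these settings in place, you might verify connectivity to the Siebel database as the Siebel administrator. For example, with an Oracle database, the tnsping utility can confirm that the net service name resolves; this sketch assumes that the Oracle client utilities are installed and that a TNS alias named siebeldb is defined in $TNS_ADMIN/tnsnames.ora:

    # su - siebel
    $ tnsping siebeldb
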
  8. Create a failover resource group to hold the logical hostname and the Siebel gateway resources.
    # clresourcegroup create [-n node] failover-rg
    -n node

    Specifies the node name that can master this resource group.

    failover-rg

    Specifies your choice of the name of the failover resource group to add. This name must begin with an ASCII character.
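
    For example, with the hypothetical node names phys-schost-1 and phys-schost-2 and a resource group named siebel-rg, the command might look like this:

    # clresourcegroup create -n phys-schost-1,phys-schost-2 siebel-rg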

  9. Add the logical hostname resource.

    Ensure that the logical hostname matches the value of the SIEBEL_GATEWAY environment variable that is set in the siebenv.sh file of the Siebel gateway and Siebel server installations.

    # clreslogicalhostname create -g failover-rg logical_host
    logical_host

    Specifies a resource name of your choice. If you do not specify a list of logical hostnames with the -h option, the logical hostname defaults to this resource name.
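
    Continuing the example, if the SIEBEL_GATEWAY variable in siebenv.sh is set to the hostname siebel-lh, the resource might be created as follows; siebel-rg and siebel-lh are placeholder names:

    # clreslogicalhostname create -g siebel-rg siebel-lh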

  10. Bring the resource group online.
    # clresourcegroup online -M failover-rg
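
    You might then confirm that the resource group and its resources are online:

    # clresourcegroup status failover-rg
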
  11. Repeat Step 8 through Step 10 for each logical hostname that is required.