Sun Cluster Data Service for Siebel Guide for Solaris OS

Installing and Configuring Sun Cluster HA for Siebel

This chapter explains how to install and configure Sun Cluster HA for Siebel.

This chapter contains the following sections.

Sun Cluster HA for Siebel Overview

Sun Cluster HA for Siebel provides fault monitoring and automatic failover for the Siebel application. High availability is provided for the Siebel gateway and Siebel server. With a Siebel implementation, any physical node running the Sun Cluster agent cannot be running the Resonate agent as well. Resonate and Sun Cluster can coexist within the same Siebel enterprise, but not on the same physical server.

For conceptual information about failover services, see the Sun Cluster Concepts Guide for Solaris OS.

Table 1 Protection of Siebel Components

Siebel Component          Protected by

Siebel gateway            Sun Cluster HA for Siebel
                          The resource type is SUNW.sblgtwy.

Siebel server             Sun Cluster HA for Siebel
                          The resource type is SUNW.sblsrvr.

Installing and Configuring Sun Cluster HA for Siebel

Table 2 lists the tasks for installing and configuring Sun Cluster HA for Siebel. Perform these tasks in the order that they are listed.

Table 2 Task Map: Installing and Configuring Sun Cluster HA for Siebel

Task: Plan the Siebel installation
Instructions: Planning the Sun Cluster HA for Siebel Installation and Configuration

Task: Prepare the nodes and disks
Instructions: How to Prepare the Nodes

Task: Install and configure Siebel
Instructions: How to Install the Siebel Gateway on the Global File System
              How to Install the Siebel Gateway on Local Disks of Physical Hosts
              How to Install the Siebel Server and Siebel Database on the Global File System
              How to Install the Siebel Server and Siebel Database on Local Disks of Physical Hosts

Task: Verify Siebel installation and configuration
Instructions: How to Verify the Siebel Installation and Configuration

Task: Install Sun Cluster HA for Siebel packages
Instructions: Installing the Sun Cluster HA for Siebel Packages

Task: Register and configure Sun Cluster HA for Siebel as a failover data service
Instructions: How to Register and Configure Sun Cluster HA for Siebel as a Failover Data Service
              How to Register and Configure the Siebel Server

Task: Verify Sun Cluster HA for Siebel installation and configuration
Instructions: How to Verify the Sun Cluster HA for Siebel Installation and Configuration

Task: Maintain Sun Cluster HA for Siebel
Instructions: Maintaining Sun Cluster HA for Siebel

Task: Tune the Sun Cluster HA for Siebel fault monitors
Instructions: Tuning the Sun Cluster HA for Siebel Fault Monitors

Planning the Sun Cluster HA for Siebel Installation and Configuration

This section contains the information you need to plan your Sun Cluster HA for Siebel installation and configuration.

Configuration Restrictions


Caution –

Your data service configuration might not be supported if you do not observe these restrictions.


Use the software and hardware configuration restrictions in this section to plan the installation and configuration of Sun Cluster HA for Siebel. These restrictions apply to Sun Cluster HA for Siebel only.

For restrictions that apply to all data services, see the release notes for your release of Sun Cluster.

Configuration Requirements


Caution –

Your data service configuration might not be supported if you do not adhere to these requirements.


Use the requirements in this section to plan the installation and configuration of Sun Cluster HA for Siebel. These requirements apply to Sun Cluster HA for Siebel only, and you must meet them before you proceed with your installation and configuration.

For requirements that apply to all data services, see Configuration Guidelines for Sun Cluster Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Standard Data Service Configurations

Use the standard configuration in this section to plan the installation and configuration of Sun Cluster HA for Siebel. Sun Cluster HA for Siebel might support additional configurations; contact your Sun service provider for information about them.

Figure 1 illustrates a possible configuration using Sun Cluster HA for Siebel. The Siebel server and the Siebel gateway are configured as failover data services.

Figure 1 Standard Siebel Configuration


Configuration Planning Questions

Use the questions in this section to plan the installation and configuration of Sun Cluster HA for Siebel. Insert the answers to these questions into the data service worksheets in Appendix C, Data Service Configuration Worksheets and Examples, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Preparing the Nodes and Disks

This section contains the procedures you need to prepare the nodes and disks.

How to Prepare the Nodes

Use this procedure to prepare for the installation and configuration of Siebel.

Steps
  1. Become superuser on all of the nodes.

  2. Configure the /etc/nsswitch.conf file so that Sun Cluster HA for Siebel starts and stops correctly if a switchover or a failover occurs.

    On each node that can master the logical host that runs Sun Cluster HA for Siebel, include the following entries in the /etc/nsswitch.conf file.

    passwd:    files nis [TRYAGAIN=0]
    publickey: files nis [TRYAGAIN=0]
    project:   files nis [TRYAGAIN=0]
    group:     files

    Sun Cluster HA for Siebel uses the su - user command to start, stop, and probe the service.

    The network information name service might become unavailable when a cluster node's public network fails. Adding the preceding entries ensures that the su(1M) command does not refer to the NIS/NIS+ name services if the network information name service is unavailable.
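
    To confirm the entries on a node, you can list them directly; this quick check is not part of the documented procedure:

    # egrep '^(passwd|publickey|project|group):' /etc/nsswitch.conf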

  3. Prevent the Siebel gateway probe from timing out while trying to open a file on /home.

    If the node that runs the Siebel gateway has a path that begins with /home and that depends on network resources such as NFS and NIS, a public network failure causes the Siebel gateway probe to hang while it tries to open a file on /home. The probe then times out and causes the Siebel gateway resource to go offline.

    To prevent the Siebel gateway probe from timing out while trying to open a file on /home, configure all nodes of the cluster that can be the Siebel gateway as follows:

    1. Eliminate all NFS or NIS dependencies for any path starting with /home.

      You can either mount the /home path locally or rename the /home mount point to /export/home or another name that does not start with /home.

    2. Comment out the line containing +auto_master in the /etc/auto_master file, and change any /home entries to auto_home.

    3. Comment out the line containing +auto_home in the /etc/auto_home file.
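
    As a quick check that is not part of the documented procedure, you can confirm on each node that the entries are commented out. The output shown here assumes the edits from substeps 2 and 3:

    # grep '^#+auto' /etc/auto_master /etc/auto_home
    /etc/auto_master:#+auto_master
    /etc/auto_home:#+auto_home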

  4. Prepare the Siebel administrator's home directory.

  5. On each node, create an entry for the Siebel administrator group in the /etc/group file, and add potential users to the group.


    Tip –

    In the following example, the Siebel administrator group is named siebel.


    Ensure that group IDs are the same on all of the nodes that run Sun Cluster HA for Siebel.

    siebel:*:521:siebel
    

    You can create group entries in a network name service. If you do so, also add your entries to the local /etc/group file to eliminate dependency on the network name service.
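
    For example, you could create this local group entry with the groupadd(1M) command. The group name and GID below come from the example entry above:

    # groupadd -g 521 siebel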

  6. On each node, create an entry for the Siebel administrator.


    Tip –

    In the following example, the Siebel administrator is named siebel.


    The following command updates the /etc/passwd and /etc/shadow files with an entry for the Siebel administrator.


    # useradd -u 121 -g siebel -s /bin/ksh -d /Siebel-home siebel
    

    Ensure that the Siebel user entry is the same on all of the nodes that run Sun Cluster HA for Siebel.

  7. Ensure that the Siebel administrator's default environment contains settings for accessing the Siebel database. For example, if the Siebel database is on Oracle, the following entries may be included in the .profile file.


    export ORACLE_HOME=/global/oracle/OraHome
    export PATH=$PATH:$ORACLE_HOME/bin
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export ORACLE_SID=siebeldb
    
  8. Create a failover resource group to hold the logical hostname and the Siebel gateway resources.


    # scrgadm -a -g failover-rg [-h nodelist]
    
  9. Add the logical hostname resource.

    Ensure that the logical hostname matches the value of the SIEBEL_GATEWAY environment variable that is set in the siebenv.sh file of the Siebel gateway installation and of the Siebel server installations.


    # scrgadm -a -L -g failover-rg -l logical_hostname
    
  10. Bring the resource group online.


    # scswitch -Z -g failover-rg
    
  11. Repeat Step 8 through Step 10 for each logical hostname that is required.

Installing and Configuring the Siebel Application

This section contains the procedures you need to install and configure the Siebel application. To install the Siebel application, you must install the Siebel gateway, the Siebel server, and the Siebel database.

To install the Siebel application, you need the following information about your configuration.

To install the Siebel application, see the following sections.

Installing the Siebel Gateway

You can install the Siebel gateway either on the global file system or on local disks of physical hosts. To install the Siebel gateway, see one of the following procedures.

How to Install the Siebel Gateway on the Global File System

Use this procedure to install the Siebel gateway on the global file system. To install the Siebel gateway on local disks of physical hosts, see How to Install the Siebel Gateway on Local Disks of Physical Hosts.

To install the Siebel gateway on the global file system, install the Siebel software only once from any node of the cluster.

Steps
  1. Install the Siebel gateway by following the instructions in the Siebel installation documentation and the latest release notes.

    Do not use the Autostart feature. When prompted, configure Autostart=NO.

  2. Verify that the siebenv.sh file is under gateway_root, and is owned by the user who will launch the Siebel gateway.

  3. In the home directory of the user who will launch the Siebel gateway, create an empty file that is named .hushlogin.

    The .hushlogin file prevents failure of a cluster node's public network from causing an attempt to start, stop, or probe the service to time out.
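
    A minimal sketch of creating this file, assuming the Siebel administrator user name (siebel) and home directory (/Siebel-home) from How to Prepare the Nodes:

    # touch /Siebel-home/.hushlogin
    # chown siebel:siebel /Siebel-home/.hushlogin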

  4. In the siebenv.sh and siebenv.csh files under gateway_root, change the value of SIEBEL_GATEWAY to the logical hostname that is selected for the Siebel gateway.

  5. Stop and restart the Siebel gateway to ensure that the gateway is using the logical hostname.
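
    How you stop and restart the gateway depends on your Siebel release. The following sketch is only an illustration; it assumes the standard Siebel UNIX scripts stop_ns and start_ns and the siebel user from How to Prepare the Nodes. Use the startup method that is documented for your Siebel release.

    # su - siebel
    $ cd gateway_root
    $ . ./siebenv.sh
    $ ./bin/stop_ns
    $ ./bin/start_ns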

How to Install the Siebel Gateway on Local Disks of Physical Hosts

Use this procedure to install the Siebel gateway on local disks of physical hosts. To install the Siebel gateway on the global file system, see How to Install the Siebel Gateway on the Global File System.


Note –

To install the Siebel gateway on local disks of physical hosts, the directory gateway_root/sys must be highly available (it must be installed on a global file system).


Steps
  1. Install the Siebel gateway on any one node of the cluster by following the instructions in the Siebel installation documentation and the latest release notes.

    Do not use the Autostart feature. When prompted, configure Autostart=NO.

  2. Verify that the siebenv.sh file is under gateway_root, and is owned by the user who will launch the Siebel gateway.

  3. In the home directory of the user who will launch the Siebel gateway, create an empty file that is named .hushlogin.

    The .hushlogin file prevents failure of a cluster node's public network from causing an attempt to start, stop, or probe the service to time out.

  4. In the siebenv.sh and siebenv.csh files under gateway_root, change the value of SIEBEL_GATEWAY to the logical hostname that is selected for the gateway.

  5. Stop and restart the Siebel gateway to ensure that the gateway is using the logical hostname.

  6. Move gateway_root/sys to /global/siebel/sys and create a link to the global file system from the local file system.


    # mv gateway_root/sys /global/siebel/sys
    # ln -s /global/siebel/sys gateway_root/sys
    
  7. Replicate the installation on all remaining nodes of the cluster.


    # rdist -c gateway_root hostname:gateway_root
    
  8. Verify that the ownerships and permissions of the files and directories in the Siebel gateway installation are identical on all nodes of the cluster.

  9. For each node on the cluster, change the ownership of the link to the appropriate Siebel user.


    # chown -h siebel:siebel gateway_root/sys
    
  10. As the Siebel user, verify that the gateway is properly installed and configured. Ensure that the following command returns a version string.


    $ srvredit -q -g SIEBEL_GATEWAY -e none -z -c '$Gateway.VersionString'
    

Installing the Siebel Server and Siebel Database

You can install the Siebel server either on the global file system or on local disks of physical hosts.


Note –

If more than one Siebel server will use the Siebel Filesystem, you must install the Siebel Filesystem on a global file system.


To install the Siebel server and configure the Siebel server and Siebel database, see one of the following procedures.

How to Install the Siebel Server and Siebel Database on the Global File System

Use this procedure to install the Siebel server and configure the Siebel server and Siebel database on the global file system. To install the Siebel server on local disks of physical hosts, see How to Install the Siebel Server and Siebel Database on Local Disks of Physical Hosts.

To install the Siebel server on the global file system, install the software only once from any node of the cluster.

Steps
  1. Install the Siebel server by following the instructions in the Siebel installation documentation and the latest release notes.

    Do not use the Autostart feature. When prompted, configure Autostart=No.

    When prompted to enter the gateway hostname, enter the logical hostname for the Siebel gateway.

  2. Verify that the siebenv.sh file is under server_root and is owned by the user who will launch the Siebel server.

  3. In the home directory of the user who will launch the Siebel server, create an empty file that is named .hushlogin.

    The .hushlogin file prevents failure of a cluster node's public network from causing an attempt to start, stop, or probe the service to time out.

  4. Ensure that a database such as HA Oracle is configured for Siebel and that the database is online.

  5. Use the Siebel documentation to configure and populate the Siebel database.

    When you create the ODBC data source (by using the dbsrvr_config.ksh script), ensure that the name is siebsrvr_siebel_enterprise.

  6. Create a database user (for example, dbuser/dbpassword) with permission to connect to the Siebel database for use by the Sun Cluster HA for Siebel Fault Monitor.
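
    How you create this user depends on your database. As a sketch only, assuming an Oracle back end as in the earlier .profile example, a DBA session might create the user as follows (dbuser and dbpassword are placeholders):

    $ sqlplus "/ as sysdba"
    SQL> CREATE USER dbuser IDENTIFIED BY dbpassword;
    SQL> GRANT CREATE SESSION TO dbuser;
    SQL> EXIT;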

  7. Log in as the user who will launch the Siebel server and manually start the Siebel server.
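
    A minimal sketch of this step; the start_server script and its ALL argument are assumptions based on standard Siebel UNIX installations, so use the startup method that is documented for your Siebel release:

    # su - siebel
    $ cd server_root
    $ . ./siebenv.sh
    $ ./bin/start_server ALL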

  8. Run srvrmgr to change the configuration of Siebel server to enable Siebel server to run in a cluster.

    • If you are using Siebel 7.7, change the ServerHostAddress parameter to the IP address of the Siebel server's logical host name resource.


      $ srvrmgr:hasiebel> change param ServerHostAddress=lhaddr for server hasiebel
      
    • If you are using a version of Siebel earlier than 7.7, change the HOST parameter to the logical hostname for the Siebel server.


      $ srvrmgr:hasiebel> change param Host=lhname for server hasiebel
      

    Note –

    These changes take effect when the Siebel server is started under Sun Cluster control.
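
    The change param commands in this step are issued inside a srvrmgr session. A sketch of starting such a session, using placeholder gateway, enterprise, server, and account names from this guide:

    $ srvrmgr /g logical_hostname /e siebel_enterprise /s hasiebel /u sadmin /p sadminpassword
    srvrmgr:hasiebel>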


How to Install the Siebel Server and Siebel Database on Local Disks of Physical Hosts

Use this procedure to install the Siebel server and configure the Siebel server and Siebel database on local disks of physical hosts. To install the Siebel server on the global file system, see How to Install the Siebel Server and Siebel Database on the Global File System.

To install the Siebel server on the local disks of the physical hosts, install the software on any one node of the cluster.

Steps
  1. Install the Siebel server by following the instructions in the Siebel installation documentation and the latest release notes.

    Do not use the Autostart feature. When prompted, configure Autostart=No.

    When prompted to enter the gateway hostname, enter the logical hostname for the Siebel gateway.

  2. Verify that the siebenv.sh file is under server_root and is owned by the user who will launch the Siebel server.

  3. In the home directory of the user who will launch the Siebel server, create an empty file that is named .hushlogin.

    The .hushlogin file prevents failure of a cluster node's public network from causing an attempt to start, stop, or probe the service to time out.

  4. Ensure that a database such as HA Oracle is configured for Siebel and that the database is online.

  5. Use the Siebel documentation to configure and populate the Siebel database.

    When you create the ODBC data source (by using the dbsrvr_config.ksh script), ensure that the name is siebsrvr_siebel_enterprise.

  6. Create a database user (for example, dbuser/dbpassword) with permission to connect to the Siebel database for use by the Sun Cluster HA for Siebel Fault Monitor.

  7. Log in as the user who will launch the Siebel server and manually start the Siebel server.

  8. Run srvrmgr to change the configuration of Siebel server to enable Siebel server to run in a cluster.

    • If you are using Siebel 7.7, change the ServerHostAddress parameter to the IP address of the Siebel server's logical host name resource.


      $ srvrmgr:hasiebel> change param ServerHostAddress=lhaddr for server hasiebel
      
    • If you are using a version of Siebel earlier than 7.7, change the HOST parameter to the logical hostname for the Siebel server.


      $ srvrmgr:hasiebel> change param Host=lhname for server hasiebel
      

    Note –

    These changes take effect when the Siebel server is started under Sun Cluster control.


  9. Replicate the installation on all of the remaining nodes of the cluster.


    # rdist -c server_root hostname:server_root
    
  10. Verify that the ownerships and permissions of the files and directories in the Siebel server installation are identical on all nodes of the cluster.

Verifying the Siebel Installation and Configuration

This section contains the procedure you need to verify the Siebel installation and configuration.

How to Verify the Siebel Installation and Configuration

Use this procedure to verify the Siebel gateway, Siebel server, and Siebel database installation and configuration. This procedure does not verify that your application is highly available because you have not installed your data service yet.

Steps
  1. Verify that the logical hostname is online on the node on which the resource(s) will be brought online.

  2. Manually start the Siebel gateway as the user who will launch the Siebel gateway.

  3. Manually start the Siebel server as the user who will launch the Siebel server.

  4. Use odbcsql to verify connectivity to the Siebel database.


    # odbcsql /s siebsrvr_siebel_enterprise /u dbuser /p dbpassword
    
  5. Run the list servers subcommand under srvrmgr.

    Before the Siebel server is configured to be highly available, the HOST_NAME parameter for the Siebel server shows the physical host name.

    After the Siebel server is configured to be highly available, the output from this command depends on the version of Siebel that you are using.

    • If you are using Siebel 7.7, the HOST_NAME parameter for the Siebel server shows the physical host name of the node where Siebel server is running. Therefore, running this command at different times might show different names, depending on whether the Siebel server resource has failed over or has been switched over.

    • If you are using a version of Siebel earlier than 7.7, the HOST_NAME parameter for the Siebel server shows the logical host name.

  6. If you are using Siebel 7.7, confirm that the serverhostaddress parameter is set to the IP address of the Siebel server's logical host name resource.


    $ srvrmgr:hasiebel> list advanced param serverhostaddress
    
  7. Test various Siebel user sessions, such as sales and call center sessions, by using a Siebel dedicated client and a supported thin client (browser).

  8. Manually stop the Siebel server as the user who started the Siebel server.

  9. Manually stop the Siebel gateway as the user who started the Siebel gateway.

Installing the Sun Cluster HA for Siebel Packages

If you did not install the Sun Cluster HA for Siebel packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster HA for Siebel packages. To complete this procedure, you need the Sun Cluster Agents CD-ROM.

If you are installing more than one data service simultaneously, perform the procedure in Installing the Software in Sun Cluster Software Installation Guide for Solaris OS.

Install the Sun Cluster HA for Siebel packages by using one of the following installation tools:


Note –

If you are using Solaris 10, install these packages only in the global zone. To ensure that these packages are not propagated to any local zones that are created after you install the packages, use the scinstall utility to install these packages. Do not use the Web Start program.


How to Install the Sun Cluster HA for Siebel Packages by Using the Web Start Program

You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.

Steps
  1. On the cluster node where you are installing the Sun Cluster HA for Siebel packages, become superuser.

  2. (Optional) If you intend to run the Web Start program with a GUI, ensure that your DISPLAY environment variable is set.

  3. Insert the Sun Cluster Agents CD-ROM into the CD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.

  4. Change to the Sun Cluster HA for Siebel component directory of the CD-ROM.

    The Web Start program for the Sun Cluster HA for Siebel data service resides in this directory.


    # cd /cdrom/cdrom0/components/SunCluster_HA_Siebel_3.1/
    
  5. Start the Web Start program.


    # ./installer
    
  6. When you are prompted, select the type of installation.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  7. Follow the instructions on the screen to install the Sun Cluster HA for Siebel packages on the node.

    After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the Web Start program created during the installation. These logs are located in the /var/sadm/install/logs directory.

  8. Exit the Web Start program.

  9. Remove the Sun Cluster Agents CD-ROM from the CD-ROM drive.

    1. To ensure that the CD-ROM is not being used, change to a directory that does not reside on the CD-ROM.

    2. Eject the CD-ROM.


      # eject cdrom
      

How to Install the Sun Cluster HA for Siebel Packages by Using the scinstall Utility

Steps
  1. Load the Sun Cluster Agents CD-ROM into the CD-ROM drive.

  2. Run the scinstall utility with no options.

    This step starts the scinstall utility in interactive mode.

  3. Choose the menu option, Add Support for New Data Service to This Cluster Node.

    The scinstall utility prompts you for additional information.

  4. Provide the path to the Sun Cluster Agents CD-ROM.

    The utility refers to the CD-ROM as the “data services cd.”

  5. Specify the data service to install.

    The scinstall utility lists the data service that you selected and asks you to confirm your choice.

  6. Exit the scinstall utility.

  7. Unload the CD-ROM from the drive.
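
    After the installation, you can confirm that the data service packages were added by listing the installed packages. This check is only a suggestion; the package names and descriptions can vary by release:

    # pkginfo | grep -i siebel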

Registering and Configuring Sun Cluster HA for Siebel

This section contains the procedures you need to configure Sun Cluster HA for Siebel.

Setting Sun Cluster HA for Siebel Extension Properties

The sections that follow contain instructions for registering and configuring resources. These instructions explain how to set only extension properties that Sun Cluster HA for Siebel requires you to set. For information about all Sun Cluster HA for Siebel extension properties, see Appendix A, Sun Cluster HA for Siebel Extension Properties. You can update some extension properties dynamically. You can update other properties, however, only when you create or disable a resource. The Tunable entry indicates when you can update a property.

To set an extension property of a resource, include the following option in the scrgadm(1M) command that creates or modifies the resource:


-x property=value

  -x property
    Identifies the extension property that you are setting.

  value
    Specifies the value to which you are setting the extension property.
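
For example, the following hypothetical command changes the Probe_timeout extension property of an existing Siebel server resource. The resource name comes from the registration procedure later in this chapter, and the property and value are used only as an illustration; see Appendix A for the properties that the Siebel resource types actually support.

    # scrgadm -c -j sblsrvr-rs -x Probe_timeout=300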

You can also use the procedures in Chapter 2, Administering Data Service Resources, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS to configure resources after the resources are created.

How to Register and Configure Sun Cluster HA for Siebel as a Failover Data Service

Use this procedure to configure Sun Cluster HA for Siebel as a failover data service. This procedure assumes that the data service packages are already installed. If the Sun Cluster HA for Siebel packages are not installed, see Installing the Sun Cluster HA for Siebel Packages.

Steps
  1. Become superuser on one of the nodes in the cluster that hosts the application server.

  2. Add the resource type for the Siebel gateway.


    # scrgadm -a -t SUNW.sblgtwy
    
  3. Create a failover resource group to hold the logical hostname and the Siebel gateway resources.


    Note –

    If you have already created a resource group, added the logical hostname resource, and brought the resource group online when you completed the How to Prepare the Nodes procedure, you may skip to Step 6.



    # scrgadm -a -g gateway-rg [-h nodelist]
    
  4. Add the logical hostname resource.

    Ensure that the logical hostname matches the value of the SIEBEL_GATEWAY environment variable that is set in the siebenv.sh file of the Siebel gateway installation and of the Siebel server installations.


    # scrgadm -a -L -g gateway-rg -l logical_hostname
    
  5. Bring the resource group online.


    # scswitch -Z -g gateway-rg
    
  6. Verify that the siebenv.sh file exists under gateway_root.

    The owner of this file launches the Siebel gateway server when the Siebel gateway resource is brought online.

  7. Create the Siebel gateway resource.


    # scrgadm -a -j sblgtwy-rs -g gateway-rg \
    -t SUNW.sblgtwy  \
    -x Confdir_list=gateway_root
    
  8. Enable the Siebel gateway resource.


    # scswitch -e -j sblgtwy-rs
    
  9. Verify that the Siebel resource group and the Siebel gateway resource are online by using the scstat -g and ps -ef commands.
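
    For example (the grep pattern assumes that the Siebel gateway processes run as the siebel user):

    # scstat -g
    # ps -ef | grep siebel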

How to Register and Configure the Siebel Server

Steps
  1. Add the resource type for the Siebel server.


    # scrgadm -a -t SUNW.sblsrvr
    
  2. Create the failover resource group to hold the logical hostname and the Siebel server resources.


    Note –

    If you have already created a resource group, added the logical hostname resource, and brought the resource group online when you completed the How to Prepare the Nodes procedure, you may skip to Step 5.



    # scrgadm -a -g siebel-rg [-h nodelist]
    
  3. Add the logical hostname resource.

    This logical hostname should match the value of the HOST_NAME parameter for the Siebel server.


    # scrgadm -a -L -g siebel-rg -l logical-hostname
    
  4. Bring the resource group online.

    The following command brings the resource group online on the preferred node.


    # scswitch -Z -g siebel-rg
    
  5. Verify that the siebenv.sh file is located under server_root.

  6. Create a file called scsblconfig under server_root, owned by the owner of siebenv.sh.

    If the Siebel server is installed locally, create the file scsblconfig under server_root on all nodes.

    For security reasons, make this file readable only by the owner.


    # cd server_root
    # touch scsblconfig
    # chown siebel:siebel scsblconfig
    # chmod 400 scsblconfig
    
  7. Select a database user (for example, dbuser/dbuserpassword) with permission to connect to the database for use by the Sun Cluster HA for Siebel Fault Monitor.

  8. Select another Siebel user (for example, sadmin/sadminpassword) with permission to run the compgrps command in srvrmgr.

  9. Add the following entries to the scsblconfig file.

    export DBUSR=dbuser
    export DBPWD=dbuserpassword
    export SADMUSR=sadmin
    export SADMPWD=sadminpassword
    
  10. Create the Siebel server resource.


    # scrgadm -a -j sblsrvr-rs -g siebel-rg \
    -t SUNW.sblsrvr \
    -x Confdir_list=server_root \
    -x siebel_enterprise=siebel enterprise name \
    -x siebel_server=siebel server name
    

    Caution –

    If you enter incorrect values for siebel_enterprise or siebel_server, you might not see any errors during validation, but resource startup will fail. If siebel_enterprise is incorrect, the validate method cannot verify database connectivity, which results in a warning only.


  11. Enable the Siebel server resource.


    # scswitch -e -j sblsrvr-rs
    
  12. Verify that the resource group and the Siebel server resource are online by using the scstat -g and ps -ef commands.

Verifying the Sun Cluster HA for Siebel Installation and Configuration

This section contains the procedure you need to verify that you installed and configured your data service correctly.

How to Verify the Sun Cluster HA for Siebel Installation and Configuration

Use this procedure to verify that you installed and configured Sun Cluster HA for Siebel correctly.

Steps
  1. Bring the Siebel database, Siebel gateway, and Siebel server resources online on the cluster.

  2. Log in to the node on which the Siebel server is online.

  3. Confirm that the fault monitor functionality is working correctly.

  4. Start srvrmgr and run the subcommand list compgrps.

  5. Verify that the required Siebel components are enabled.

  6. Connect to Siebel using a supported thin-client (browser) and run a session.

  7. As the root user, switch the Siebel server resource group to another node.


    # scswitch -z -g siebel-rg -h node2
    
  8. Repeat Step 4, Step 5, and Step 6 for each potential node on which the Siebel server resource can run.

  9. As the root user, switch the Siebel gateway resource group to another node.


    # scswitch -z -g gateway-rg -h node2
    

Maintaining Sun Cluster HA for Siebel

This section contains guidelines for maintaining Sun Cluster HA for Siebel.


Caution –

If the Siebel server is started manually without disabling the resource or bringing the resource group to an unmanaged state, the Siebel resource start method might “reset” the service on the node where the resource is being started under Sun Cluster control. This action might lead to unexpected results.


Tuning the Sun Cluster HA for Siebel Fault Monitors

Fault monitoring for the Sun Cluster HA for Siebel data service is provided by the Siebel server fault monitor and the Siebel gateway fault monitor.

Each fault monitor is contained in a resource whose resource type is shown in the following table.

Table 3 Resource Types for Sun Cluster HA for Siebel Fault Monitors

Fault Monitor             Resource Type

Siebel server             SUNW.sblsrvr

Siebel gateway            SUNW.sblgtwy

System properties and extension properties of these resources control the behavior of the fault monitors. The default values of these properties determine the preset behavior of the fault monitors. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune the Sun Cluster HA for Siebel fault monitors only if you need to modify this preset behavior.

Tuning the Sun Cluster HA for Siebel fault monitors involves the following tasks:

For more information, see Tuning Fault Monitors for Sun Cluster Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS. Information about the Sun Cluster HA for Siebel fault monitors that you need to perform these tasks is provided in the subsections that follow.

Tune the Sun Cluster HA for Siebel fault monitors when you register and configure Sun Cluster HA for Siebel. For more information, see Registering and Configuring Sun Cluster HA for Siebel.
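
For example, the following hypothetical command adjusts the retry settings and the probe timeout of the Siebel server resource. The resource name and the values are placeholders; see Appendix A for the extension properties that SUNW.sblsrvr supports.

    # scrgadm -c -j sblsrvr-rs -y Retry_count=4 -y Retry_interval=600 -x Probe_timeout=300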

Operation of the Siebel Server Fault Monitor

During a probe, the Siebel server fault monitor tests for the correct operation of the following components:

Operation of the Siebel Gateway Fault Monitor

The Siebel gateway fault monitor monitors the Siebel gateway process. If the Siebel gateway process dies, the fault monitor restarts it, or fails it over to another node.