Installing Siebel eBusiness Applications with VERITAS
VERITAS Cluster Server (VCS) lets you monitor systems and application services, and restart services on a different system when hardware or software fails. A VCS cluster consists of multiple systems connected in various combinations to a shared storage device.
All systems within a cluster have the same cluster ID, and are connected by redundant private networks over which they communicate by heartbeats, or signals sent periodically from one system to another.
Applications can be configured to run on specific nodes within the cluster. Storage is configured to provide access to shared data for nodes hosting the application. The storage connectivity determines where applications are run. All nodes sharing access to storage are eligible to run an application. Clustering allows the logical Siebel Server to run on any physical node.
Before Siebel eBusiness Applications are installed, install and configure all appropriate VERITAS software. To install and configure the VERITAS agent, refer to the VERITAS Cluster Server Agent for Siebel Server Installation and Configuration Guide. For additional assistance with installing VERITAS software, refer to the VERITAS documentation or contact VERITAS professional services.
For detailed Siebel software installation steps, see Installing the Siebel Gateway and Installing the Siebel Server.
Using VERITAS Cluster Server Groups
Use a separate User ID for each VERITAS Cluster Server (VCS) service group. Determine the number and type of VCS service groups based on several factors. For example, one User ID or service group can contain both the Siebel Server and the Siebel Gateway. Alternatively, there could be one service group for each Siebel Gateway and Siebel Server. This allows you to control switchover or failover at the Siebel Server or Siebel Gateway level instead of at the machine level.
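For example, with one service group per Siebel Server, you can move an individual server between nodes without disturbing the rest of the Enterprise. A minimal sketch using the standard VCS command line, run as root (the service group and node names are illustrative):

# hagrp -switch sbl_srvr1 -to node2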
Selecting User Groups in a VERITAS Clustered Environment
Designate a Solaris user group for the Siebel VCS service groups. Do not use the Solaris other group.
Designate this group as the primary group for all Siebel and Sun ONE (iPlanet) users. Make sure that the default umask for all these users is 007 or 004.
If you test a file using the touch command, you should see permissions of rw-rw---- or rw-rw-r--. The default umask can be set in the user's profile or login script.
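For example, a minimal check of the umask and the resulting permissions (the file name, owner, and group shown are illustrative):

$ umask 007
$ touch testfile
$ ls -l testfile
-rw-rw----   1 sieb1    siebgrp        0 Jun 24 12:00 testfile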
Configuring Login Profiles in a VERITAS Clustered Environment

Verify that the login profiles of all Siebel User IDs do not reference any script or file that is available on only a specific node. It is recommended that any Siebel-specific environment variable settings and other related actions be implemented by scripts that are completely contained in the respective service group's filesystem.
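For example, a sketch of a login profile entry that keeps all Siebel-specific settings inside the service group's own filesystem (the mount point is illustrative; siebenv.sh is assumed to be the environment script created under $SIEBEL_ROOT by the Siebel Server installation):

# Source Siebel environment settings from the service group filesystem only
if [ -f /sbl_srvr1/siebel/siebenv.sh ]; then
    . /sbl_srvr1/siebel/siebenv.sh
fi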
Synchronize System Information in a VERITAS Clustered Environment
Verify that system information such as User IDs and passwords, /etc/hosts, /etc/resolv.conf, /etc/services, and so on is synchronized among the nodes in one or more local clusters or VCS and on any Global Clusters or Global Cluster Manager. You usually have to configure this externally, outside of the VERITAS Cluster Manager.
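For example, a quick consistency check of a few system files against another node, assuming remote shell access between nodes (the node name is illustrative; substitute ssh for rsh if you prefer):

for f in /etc/hosts /etc/resolv.conf /etc/services; do
    # Compare the local copy with the copy on node2 and report any differences
    rsh node2 cat $f | diff $f - > /dev/null || echo "$f differs on node2"
done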
Assign Resources for Siebel Servers in a VERITAS Clustered Environment

Assign appropriate resources, including Disk Groups and virtual IP addresses, for each of the logical Siebel Servers. Validate that the $SIEBEL_ROOT directory is owned by the corresponding Siebel User ID.
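For example, to confirm the ownership (the directory, user, and group names are illustrative):

$ ls -ld /sbl_srvr1/siebel
drwxrwx---  12 sieb1    siebgrp     1024 Jun 24 12:00 /sbl_srvr1/siebel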
Set Environmental Variables in a VERITAS Clustered Environment

Before installing the Siebel Gateway and Siebel Servers, set the $SIEBEL_ROOT and $SIEBEL_GATEWAY environment variables in each Siebel Server User ID's login profile. $SIEBEL_ROOT is the root directory where the Siebel Server is installed. $SIEBEL_GATEWAY should be the virtual IP address or hostname of the corresponding Gateway's service group.
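For example, a sketch of the corresponding entries in a Siebel Server User ID's .profile (the root directory and the Gateway service group's virtual hostname are illustrative):

# Siebel environment for this service group's Siebel Server
SIEBEL_ROOT=/sbl_srvr1/siebel
SIEBEL_GATEWAY=sblgw-vip
export SIEBEL_ROOT SIEBEL_GATEWAY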
NOTE: You can speed installation if you set the $SIEBEL_GATEWAY and $SIEBEL_ROOT environment variables before you begin installation. The installer then automatically installs the Siebel Gateway and Siebel Server in the designated locations. Otherwise, you are prompted for the locations.

Verify that the LD_LIBRARY_PATH, SHLIB_PATH, and LIBPATH environment variables on each node are identical and contain all the necessary Siebel, RDBMS, and OS library paths. The Siebel paths for LIBPATH include the following:

$SIEBEL_ROOT/lib
$SIEBEL_ROOT/mw/lib
$SIEBEL_ROOT/SYBSsa50
The environment variables LD_LIBRARY_PATH, SHLIB_PATH, and LIBPATH should be identical in value, that is, each equal to the union of LD_LIBRARY_PATH, SHLIB_PATH, and LIBPATH.
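For example, a sketch of keeping the three variables identical in a login profile (the non-Siebel paths are illustrative and depend on your RDBMS and operating system):

# Build one library path and assign it to all three variables
LIBPATH=$SIEBEL_ROOT/lib:$SIEBEL_ROOT/mw/lib:$SIEBEL_ROOT/SYBSsa50
LIBPATH=$LIBPATH:/usr/lib:/opt/oracle/lib
LD_LIBRARY_PATH=$LIBPATH
SHLIB_PATH=$LIBPATH
export LIBPATH LD_LIBRARY_PATH SHLIB_PATH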
Use Clear Naming Conventions in a VERITAS Clustered Environment

If you will be operating a heterogeneous server environment, use UNIX naming conventions for servers: names no longer than 12 characters that contain no spaces or special characters.
Use machine names for the Siebel Enterprises, Siebel Gateway and Siebel Servers that are cluster-independent and node-independent.
If you are using Siebel Remote, the physical node hostname on which it is running must remain unchanged at all times, even after a switchover or failover.
Configuring Service Groups in a VERITAS Clustered Environment
You can speed installation in a clustered environment if you can access the database, Web Server, Siebel Gateway, Siebel Server, and all application drives from a single machine. This lessens the necessity of switching from one machine to another, swapping CDs, and so forth.
First install all applications from that single machine and test them individually. To begin each installation, use su (substitute user) to switch to the username that owns the corresponding service group and Siebel Server.

Clustering requires each service group to have its own filesystem. The service group filesystem is supported by a volume on a disk group assigned to the service group.
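For example, before beginning an installation you might confirm that the service group's filesystem is mounted on the current node and then switch to the owning user (the mount point and user name are illustrative):

# df -k /sbl_srvr1
# su - sieb1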
You should place the $SIEBEL_ROOT location in a subdirectory under the service group mount point, not at the mount point itself. This creates a directory tree structure that is uncluttered and easier to maintain.

For example, suppose you have a service group named sbl_srvr1 with mount point /sbl_srvr1. Specify the $SIEBEL_ROOT directory as /sbl_srvr1/siebel, not /sbl_srvr1.
If there are multiple nodes within the cluster, place the Siebel File System on its own service group or outside of any Siebel service groups. This prevents it from being affected by any Siebel application service group switchover or failover.
After completing all installations, you should test standard functionality without the VERITAS Siebel Server agents. Then verify that each Siebel Server service group runs successfully on all its applicable nodes. Also validate that the Siebel Gateway and Siebel Servers are functional on each node of the cluster.
Check Database Connectivity in a VERITAS Clustered Environment
Verify that the appropriate RDBMS client software for each clustered Siebel Server is available on each physical node. You can do this in either of the following ways:
- Installing the RDBMS client software under each Siebel Server's file system
- Installing the RDBMS client at the same fixed directory on each node
Before beginning the Siebel software installation, verify that RDBMS connectivity works by using the utilities native to your RDBMS. After the installation of each Siebel Server, check database connectivity using the procedures described in Installing the Siebel Database Server for DB2 UDB or Installing the Siebel Database Server for Oracle as appropriate, to verify the ODBC connection.
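For example, you can exercise native connectivity from each node with the vendor's own client utility before running the Siebel installers (the connect strings, user, and database alias are illustrative):

For Oracle:
$ sqlplus sadmin/password@siebdb

For DB2 UDB:
$ db2 connect to siebdb user sadmin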
NOTE: Siebel applications are designed to reconnect to the database automatically if the connection is temporarily lost. However, if the database fails over and the reconnect mechanism does not successfully reconnect, you need to restart the application manually.
Clustering the Siebel Gateway in a VERITAS Clustered Environment
Use the Server Manager GUI to change the Gateway Name Server's Hostname parameter and IP Address to the Siebel Gateway Resource Group Hostname and IP Address, respectively.
NOTE: When specifying the Gateway address during the installation of Siebel Servers, always use the Resource Group IP Address and Hostname.
Clustering the Database Server in a VERITAS Clustered Environment
If you intend to cluster your Siebel Database Server, follow the procedures provided by your database vendor.
Clustering the Siebel File System in a VERITAS Clustered Environment
The Siebel File System is a directory hierarchy used to store attachment files used by the Siebel application. It must be accessible from all Siebel Servers through a common name. If dedicated Web clients are also used and require direct file system access, then all dedicated Web clients must be able to access the Siebel File System as a network share.
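For example, one common approach is to export the Siebel File System directory from its service group and mount it on every Siebel Server node under the same path (the virtual hostname and paths are illustrative; whether NFS or another sharing mechanism is appropriate depends on your deployment):

# mount -F nfs sblfs-vip:/export/siebelfs /siebelfs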
Configure the Web Agent in a VERITAS Clustered Environment
Install the VERITAS Agent for Siebel Server after the Siebel Server has been installed successfully and tested. Access the VCS Cluster Manager and modify, as needed, the following parameters for the VERITAS Agent for Siebel Server:
Attribute: Port
Value: 8088
Description: 8088 is the default; enter the port number on which the Web Server has been configured for Siebel eBusiness Applications.

Attribute: SvrSubDir
Value: /web/iplanet/https-14067
If you co-locate Siebel Servers on one machine under a single User ID and stop one of them using the stop_server script, the other server may be adversely affected due to a deallocation of shared memory. Use the -M option when you execute stop_server to prevent the shared memory from being deallocated.
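For example, a sketch of stopping one co-located server while preserving the shared memory used by the other (the server name is illustrative; check the stop_server usage message on your installation for the exact argument order):

$ stop_server -M sieb_srvr1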
Sun ONE (iPlanet) Web Server Installation in a Clustered Environment

There are no special requirements for installing the Sun ONE (iPlanet) Web Server for use with the SWSE in a cluster. Follow the standard instructions given in Installing Siebel Web Server Extension and in the Sun Cluster for Sun ONE (iPlanet) installation guide.
If a Shared Install configuration is used, the Sun ONE (iPlanet) Web Server should be installed on a shared global file system. If a Local Install configuration is used, the Sun ONE (iPlanet) Web Server should be installed on a local file system on each server.
Once the Sun ONE (iPlanet) software is installed, it is recommended that a dedicated Web Server be created for each Siebel Enterprise. For a Local Install configuration, the same parameters should be used on each server. Configure the Web Server with the parameters described in the Sun Cluster Agent for Sun ONE (iPlanet) documentation.
Change the ownership of all files associated with the Siebel Web Server to the Siebel user and group by running the following command as root:

# chown -R SIEBEL_USER:SIEBEL_GROUP SERVER_DIR

where:

SIEBEL_USER is the user ID used to install the Siebel Web Server Extension (SWSE) on the Web Server.
SIEBEL_GROUP is the group ID.
SERVER_DIR is the directory where SWSE is installed.

For example:

# chown -R siebusr:siebgrp /usr/netscape/server/https-siebel
Configuring Web Server Extension Port in a VERITAS Clustered Environment
While installing the Siebel Web Server Extension, if you are using a nonstandard port, enter the Web server port as part of the Web server address or hostname. For example, if the machine hostname is websrvr1 and the Web server port is 8088, then during installation enter the Web server address as websrvr1:8088.