NFS Recommendation for Siebel File System
The Siebel File System (SiebFS) is typically installed on a central
global host that shares it with connected clients. These clients can be Siebel
servers, dedicated ("thick" or remote) clients, a Siebel document server, and so on,
depending on the version and configuration.
To ensure that a file being accessed by one client can't simultaneously be opened by another, SiebFS relies on the file system's internal lock mechanism; the host file server itself must therefore implement a global locking concept. File locking is implemented differently among operating systems.
Siebel administrators and network architects who wish to implement the Siebel File System successfully must consider the configuration prerequisites and implementation best practices when the host server is accessed over NFS.
If the file server is hosted on Windows using the Server Message Block (SMB) protocol, then file locking is enabled by default and no extra steps need to be taken.
With UNIX and Linux, Samba can provide multi-platform SMB access for Siebel servers running on Windows.
However, in a pure UNIX deployment, the remote share is usually implemented using NFS and certain configuration steps should be verified.
The NFS servers must have the lockd and statd daemons
enabled, in addition to basic NFS daemons that implement mounting and accessing a share.
These locking daemons must be tuned for the number of threads on various platforms in
order to manage the high volume of concurrent lock requests a large-scale Siebel system
generates.
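A quick way to verify that these daemons are registered on the NFS server is to query its RPC portmapper from any client (siebfs_host is a placeholder for the actual file server name):
rpcinfo -p siebfs_host | grep -E 'nlockmgr|status'
Both nlockmgr (the lock manager, lockd) and status (the status monitor, statd) should appear in the output.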
NFS server on AIX:
# chssys -s rpc.lockd -a 511
# stopsrc -s rpc.lockd; startsrc -s rpc.lockd
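Here, chssys changes the rpc.lockd subsystem definition so that the daemon starts with 511 lock threads, and the stopsrc/startsrc pair restarts it so the new value takes effect; 511 is simply the value used in this example.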
It's recommended that lockd be enabled on every client machine for
better load distribution.
NFS server on Solaris or HP-UX 11i v3 (Itanium):
/usr/lib/nfs/lockd [nthreads]
nthreads should be set to a value of 200 initially.
This can also be set by defining the LOCKD_SERVERS parameter in the nfs file, as shown in the sketch below. The lockd thread tuning for HP-UX is only available with 11i v3 (Itanium); other HP-UX versions are unsuitable as SiebFS hosts. The local lock mount option (llock) isn't supported by Siebel, because SiebFS relies on global, system-wide file locking; local locking causes data inconsistencies.
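A minimal sketch for a Solaris NFS server, assuming the defaults file is /etc/default/nfs and that the SMF lock manager service is in use (verify both for your release):
# vi /etc/default/nfs        (set LOCKD_SERVERS=200)
# svcadm restart svc:/network/nfs/nlockmgr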
NFS server on Linux:
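A minimal check for a Linux NFS server, assuming a systemd-based distribution (unit names vary between distributions):
# systemctl status rpc-statd nfs-server
On current Linux kernels, lockd runs as a kernel thread and is started automatically with the NFS service.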
IMPORTANT NOTICE: To reduce the load on the locking subsystem, the anonymous user's preference file should be set to read-only. This file is accessed for each session sign-in, and accessing a read-only file doesn't generate a write lock request, since the file won't be altered; in other words, no writes are generated during the anonymous phase of the sign-in process. Failing to set the file read-only significantly increases the number of file lock requests. For example, to set the preference file for user "GUESTCST" to read-only, run the following commands:
cd filesystem/userpref
chmod a-w "GUESTCST&Siebel Universal Agent.spf"
The read-only setting remains in effect as long as the corresponding user account is dedicated solely to anonymous sessions. If best practices are ignored and the account is also used for regular session log-ins, the preference file will be updated and its attributes reset to read-write. Likewise, if the preference file is deleted from the userpref folder, it will be automatically recreated with read-write attributes, in which case the write attribute must be manually removed again.
For the following components, the CFGSharedModeUsersDir parameter can be set to local disk storage to reduce the dependency on NFS, thus reducing the likelihood of hanging/locking issues related to .spf files residing on NFS:
- EAIAnonObjMgr_enu
- EAIObjMgr_enu
- WfProcBatchMgr
- WfProcMgr
Example: change param CFGSharedModeUsersDir=/local/path/userpref for comp (or
compdef) WfProcBatchMgr
In addition to the above, the SavePreferences parameter can be set to
False to avoid saving user preferences at component task completion.
Example: change param SavePreferences=False for comp (or compdef)
WfProcBatchMgr. In certain custom implementations, SavePreferences might be
required.
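The change param examples above are srvrmgr commands. A minimal sketch of applying one, with hypothetical gateway, enterprise, Siebel Server, and credential values (substitute your own):
srvrmgr -g gateway_host -e SIEBEL_ENT -s siebsrvr1 -u SADMIN -p password
srvrmgr> change param SavePreferences=False for comp WfProcBatchMgr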
It's recommended that you test in a lower environment to verify that everything works as expected before going live in Production.