iPlanet Market Maker 1.0 Deployment Guide



Chapter 4   Installation and Configuration




Installation

At the end of the planning stage of your deployment, you should have decided on the production setup and the server architecture for your marketplace. The basic production scenario described in the previous chapter is only a suggestion. This chapter gives you an overview of preparing, installing, and configuring the hardware and software systems so that your marketplace functions smoothly. For detailed instructions on installing the iPlanet Market Maker software, refer to the iPlanet Market Maker Installation Guide. Before starting the installation of the iPlanet Market Maker software, follow the steps below in sequence to ensure proper installation.

  • Ensure connection with the Oracle database.

  • Check the Java version. At the command prompt, type:

       java -version

    It should report version 1.2.2.

  • Stop your web server.

  • Verify that the directory server is running.

  • If you had iPlanet Market Maker Beta 1 or Beta 2 installed, ensure that the previous installation is completely uninstalled and the contents removed from their respective directories. For details, refer to the iPlanet Market Maker Installation Guide.
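The checks above lend themselves to a small script. The following is a minimal sketch covering only the Java check (the JDK 1.2.2 requirement is from this chapter; the helper name and messages are illustrative, and the other checks would be site-specific):

```shell
#!/bin/sh
# Sketch of a pre-installation check for an iPlanet Market Maker host.
# Only the Java check is shown; extend with database, web server, and
# directory server checks for your environment.

# Returns 0 if the given "java -version" output reports JDK 1.2.2.
java_version_ok() {
    case "$1" in
        *1.2.2*) return 0 ;;
        *)       return 1 ;;
    esac
}

if command -v java >/dev/null 2>&1; then
    # On JDK 1.2.x, "java -version" prints to stderr, so capture both streams.
    if java_version_ok "`java -version 2>&1 | head -1`"; then
        echo "Java version OK"
    else
        echo "Java version mismatch -- install JDK 1.2.2 first" >&2
    fi
else
    echo "java not found in PATH" >&2
fi
```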

Extract the iPlanet Market Maker installation files and install the software on the server(s) where the iPlanet Web Server is installed. Start the installation with the following steps and proceed to configure the systems.

  • Install and configure Solaris (with appropriate patches) on the servers designated to host the Oracle database, iPlanet Directory Server, and iPlanet Web Server (and of course, the iPlanet Market Maker software).

  • Install and configure Oracle 8.1.6 with interMedia option, iPlanet Directory Server 4.12, and iPlanet Web Server 4.1 SP3 on the designated servers. If you need to install iPlanet Directory Server, and iPlanet Web Server on the same machine, install iPlanet Directory Server first.

  • Install JDK patches for Solaris before installing the JDK 1.2.2_06.

  • Install and configure the Resonate application on those servers at the first layer of contact for the users where you want the load balancing to be carried out.



Configuration


Configuring Solaris

The following section outlines the optimal settings for Solaris before you install the iPlanet Market Maker software. Note that the iPlanet Market Maker software requires Solaris 2.6 or Solaris 2.8.

To ensure that the iPlanet Market Maker software works properly, make sure that your Solaris 2.6 or 2.8 installation has the latest patches. For more information about patches, go to http://sunsolve.Sun.COM/pub-cgi/show.pl?target=patches/patch-access


Solaris 2.6

On the command line, type:

uname -a

Make sure that the line returned contains the text:

Generic_105181-23

Install the following patches on the servers that host the iPlanet Web Server and the iPlanet Market Maker software. The table below lists patch IDs and levels; ensure that each patch is installed at the level given in the table. Information about installed patches can be obtained from /var/sadm/patch.


Table 4-1    Patches for Solaris 2.6 (patch ID-level)

  105181-23   105580-15   105837-03   106468-02   107774-01
  105210-32   105591-09   105924-03   106495-01   107991-01
  105216-04   105600-19   106027-08   106522-04   108199-01
  105284-36   105615-08   106040-13   106569-01   108201-01
  105338-25   105621-23   106112-05   106592-03   108307-02
  105356-15   105633-41   106123-04   106625-05   108346-03
  105357-04   105642-08   106125-10   106639-04   108468-02
  105375-22   105654-03   106172-04   106648-01   108492-01
  105379-05   105665-03   106193-05   106649-01   108499-01
  105401-28   105667-02   106222-01   106650-04   108660-01
  105403-03   105669-10   106226-01   106828-01   108804-01
  105464-02   105703-22   106235-05   106834-01   108890-01
  105472-07   105720-12   106242-02   106894-01   108893-01
  105529-09   105722-05   106257-05   107336-01   108895-01
  105552-03   105741-07   106271-06   107434-01   109266-01
  105558-04   105755-08   106301-01   107565-02   109339-01
  105562-03   105780-05   106415-03   107618-01   109388-01
  105566-08   105786-12   106437-03   107733-08
  105568-18   105800-06   106439-06   107758-01
  105570-01   105802-12   106448-01   107766-01

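Whether a required patch is installed at a sufficient level can be checked with showrev -p. The following sketch assumes patch revisions in the usual <ID>-<level> form; the helper name is illustrative:

```shell
#!/bin/sh
# Sketch: check that a required patch is installed at the required level
# (or higher). On a live Solaris host, the installed list would come from
# "showrev -p" or the /var/sadm/patch directory.

# Usage: patch_ok <required ID-level> <installed ID-levels, one per line>
patch_ok() {
    req_id=`echo "$1" | cut -d- -f1`
    req_lvl=`echo "$1" | cut -d- -f2`
    echo "$2" | while read p; do
        id=`echo "$p" | cut -d- -f1`
        lvl=`echo "$p" | cut -d- -f2`
        [ "$id" = "$req_id" ] && [ "$lvl" -ge "$req_lvl" ] && echo yes
    done | grep -q yes
}

# Example on a live system (the second field of "showrev -p" is the ID-level):
#   patch_ok 105181-23 "`showrev -p | awk '{print $2}'`" && echo "105181 OK"
```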

Solaris 2.8

On the command line, type:

uname -a

Make sure that the line returned contains the text:

Generic_108528-03


Initial Oracle Configuration

The default installation is sized for a small sample database. The Oracle database should be set up and configured to meet the specific requirements of your marketplace. Some recommendations on configuring Oracle are given below. For more details, refer to the document, Oracle 8i Designing and Tuning for Performance Release 2 (8.1.6).


Recommended initial DBA setup

  1. Setting Oracle initialization parameters: The following are the recommended changes to the parameters in the Oracle initialization parameters (init<SID>.ora) file.

    1. Resize the SGA (System Global Area).

      The SGA is the memory area available to an iPlanet Market Maker installation. To achieve optimal performance, set the SGA size as large as possible, depending on the memory available on the server. This is done by changing the shared SQL pool and data block buffer cache settings.

      1. Set the shared SQL pool: This value is changed by setting the shared_pool_size parameter (in bytes). Start by setting this value to 100 MB, and monitor the usage of the shared SQL pool.

      2. Set the data block buffer cache size: The data block buffer cache size is set by the db_block_buffers init.ora parameter (in database blocks). The recommended size of the data block buffer cache is 2 percent of the database size. The recommended db_block_size for an iPlanet Market Maker installation is 8192 bytes.

    2. Improve sorting operations: Set sort_area_size and sort_area_retained_size so that sorts (disk) are less than 5 percent of total sorts, because searching the catalog is resource intensive due to the hierarchical nature of the queries and the interMedia option.

    3. Set db_block_lru_latches to the number of CPUs on a multiple-CPU system.

  2. Net8 configuration:

    All the installation scripts use SQLPLUS, which in turn connects using the Net8 configuration to support installation from client machines. If you are installing on the server machine, make sure that Net8 is configured and that SQLPLUS can connect to the service name set up in Net8 (for example, sqlplus userid/passwd@service_name).
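The 2-percent guideline above translates into a quick calculation. The following is a sketch only (the helper name is illustrative; the 25 GB figure matches the reference installation sized later in this chapter):

```shell
#!/bin/sh
# Sketch: derive db_block_buffers from the "2 percent of database size"
# guideline, with the recommended 8192-byte db_block_size.

# Usage: db_block_buffers <database size in GB>
db_block_buffers() {
    db_bytes=$(( $1 * 1073741824 ))    # GB -> bytes
    cache_bytes=$(( db_bytes / 50 ))   # 2 percent of the database
    echo $(( cache_bytes / 8192 ))     # bytes -> 8 KB database blocks
}

db_block_buffers 25    # prints 65536 for a 25 GB database
```

For the 25 GB reference database this yields 65536 blocks, in the same range as the 60736 recommended later in Figure 4-1.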


Database initialization parameters

Performance tests were conducted on an iPlanet Market Maker installation for the following functions: catalog loading, searching, and browsing; auction bidding; and community registration (companies and users). The installation was sized to handle 1000 concurrent users and a 25 GB database. The following parameter settings are recommended based on the results of these tests.

Figure 4-1    Oracle Initialization parameters

  Name                         Value
  -------------------------    ---------
  compatible (Oracle version)  8.1.0
  db_block_buffers             60736
  db_block_lru_latches         30
  db_block_size                8192
  instance_name                VORTEX
  enqueue_resources            20000
  log_buffer                   5120000
  log_checkpoint_interval      1000000
  log_checkpoint_timeout       0
  open_cursors                 500
  processes                    1015
  rollback_segments            (RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28)
  shared_pool_size             419430400
  sort_area_retained_size      1024000
  sort_area_size               5120000
  timed_statistics             TRUE


For an explanation of other parameters, refer to the Oracle 8i Administrator's Guide, Release 2 (8.1.6).



iPlanet Market Maker Configuration



After the installation, the iPlanet Market Maker software needs to be configured to ensure that the connections to LDAP and the Oracle database are available from the iPlanet Market Maker host and that the job scheduler runs properly.

The required settings in the iPlanet Market Maker software are:

    • Settings to connect to remote LDAP instances.

    • Settings to connect to remote Oracle instances.

    • Oracle configuration and settings.

    • Job Scheduler settings.

The sections that follow describe these settings, the impact of individual modules on database performance, and the recommended storage sizes for their core tables.


Settings to Connect to Remote LDAP Instances

Presently, the iPlanet Market Maker installer cannot perform a remote login and install the LDAP schema and data from a web server / iPlanet Market Maker host. However, even when installing only the iPlanet Market Maker core, the installer prompts for the remote host names, port numbers, user name, and password of the LDAP server so that it can make the appropriate settings in the property files. Specifying the correct host names and port numbers during installation of the iPlanet Market Maker software makes the appropriate changes to the property files, VortexConfiguration and userxConfiguration. Nonetheless, check the following settings after the installation to ensure that they point to the correct locations.


To connect to a remote LDAP instance

On the web server / iPlanet Market Maker hosts, in the file $IMM_HOME/properties/com/iplanet/userx/userxconfig.properties, change the following settings to point to the correct host and port:

ldaphost: <remote host name>

ldapport: <port number on which the LDAP instance listens>

On the web server / iPlanet Market Maker hosts, the LDAP `read' and `write' user names and passwords can be found in the file $IMM_HOME/properties/VortexConfiguration.properties.


LDAP connection pool settings

On the web server / iPlanet Market Maker hosts, in the file $IMM_HOME/properties/com/iplanet/userx/userxconfig.properties, change the following settings to the new values:

ldappool.minimum=150 (Default value: 10)

ldappool.maximum=300 (Default value: 20)

Each connection handles client requests serially, so the minimum and maximum settings depend on the number of concurrent users. A setting of 150/300 per LDAP connection pool is safe for an expected load of 300 concurrent users per web server.
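The two edits above can be applied with a short script. The sketch below assumes the default property layout shown here (one name=value per line); the helper name is hypothetical, and the file should be backed up first:

```shell
#!/bin/sh
# Sketch: raise the LDAP connection pool limits in userxconfig.properties.
# Back up the file before editing it.

# Usage: set_pool_sizes <properties file> <minimum> <maximum>
set_pool_sizes() {
    sed -e "s/^ldappool.minimum=.*/ldappool.minimum=$2/" \
        -e "s/^ldappool.maximum=.*/ldappool.maximum=$3/" \
        "$1" > "$1.new" && mv "$1.new" "$1"
}

# Example for 300 expected concurrent users per web server:
#   set_pool_sizes \
#     $IMM_HOME/properties/com/iplanet/userx/userxconfig.properties 150 300
```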


Settings to Connect to Remote Oracle Instances


To connect to a remote Oracle instance

The setting that points to the database URL for the type 4 (thin) driver needs to be changed so that the URL contains the remote host name instead of the local host name.

On the web server / iPlanet Market Maker hosts in the file $IMM_HOME/properties/VortexConfiguration.properties, change the following setting to point to the correct host and port locations :

CFG_DATABASE_URL=jdbc:oracle:thin:@<remote host name>:1521:<instance name>


Oracle connection pool settings

On the web server / iPlanet Market Maker hosts, in the file $IMM_HOME/properties/VortexConfiguration.properties, change the following setting to the new value:

CFG_CPOOL_MAXIMUM=500 (Default value: 30)

Some requests from the code open multiple database connections per user. To avoid running out of database connections, 500 is a safe maximum connection pool size per web server for loads of up to 300 concurrent users through each iPlanet Market Maker host.
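To confirm that CFG_DATABASE_URL points at the intended remote database, the host, port, and instance can be pulled out of the thin-driver URL and then probed (for example, with tnsping). A minimal sketch; the helper name is illustrative and "dbhost" is a placeholder host name (the VORTEX instance name is the one shown in Figure 4-1):

```shell
#!/bin/sh
# Sketch: split a thin-driver URL of the form
# jdbc:oracle:thin:@<host>:<port>:<instance> into its three parts.

# Usage: jdbc_host_port <url>
jdbc_host_port() {
    echo "$1" | sed 's/^jdbc:oracle:thin:@//' | awk -F: '{ print $1, $2, $3 }'
}

jdbc_host_port "jdbc:oracle:thin:@dbhost:1521:VORTEX"   # prints: dbhost 1521 VORTEX
```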


Oracle Configuration and Settings

Each module (Catalog, Pricing, Community, Architecture (also known as Common), Auction, Negotiations, and Order Management System) has its own database schema owner. The schema owners are CAT, PRI, COM, CMN, AUC, RFX, and OMS. The passwords are entered during installation for each module (for example, CAT/CAT_PWD). All the objects (that is, tables, sequences, and so on) owned by these schema owners are granted to the application user "VORTEX". The password for the "VORTEX" user is entered during installation. The iPlanet Market Maker user interface connects to the database from the web servers as the "VORTEX" user. The table below gives the values to enter for the Oracle settings during installation.


Table 4-2    Oracle Settings

                                                 Default Tablespaces
  Module Name               Schema / Password    Data      Index     Temporary
  -----------------------   -----------------    ------    -------   ---------
  Architecture (Common)     CMN/CMN_PWD          CMN_TS    CMN_XTS   TEMP
  Catalog                   CAT/CAT_PWD          CAT_TS    CAT_XTS   TEMP
  Pricing                   PRI/PRI_PWD          PRI_TS    PRI_XTS   TEMP
  Auction                   AUC/AUC_PWD          AUC_TS    AUC_XTS   TEMP
  Negotiations              RFX/RFX_PWD          RFX_TS    RFX_XTS   TEMP
  Order Management System   OMS/OMS_PWD          OMS_TS    OMS_XTS   TEMP
  iPlanet Market Maker      VORTEX/VORTEX_PWD    CMN_TS    CMN_XTS   TEMP

Oracle schema creation is done as a part of installation using the iPlanet Market Maker installer. The iPlanet Market Maker installer gives the option of installing the following independently:

  • iPlanet Market Maker Core.

  • iPlanet Market Maker Directory Server Database.

  • iPlanet Market Maker Oracle Database.

Make sure that the option (check box) to create the Oracle schema is checked. The installer runs the $IMM_HOME/db/install/install_db_rel.sh script, passing it the appropriate parameters. All the SQL scripts that create the schema have drop statements before creating the objects. The following points are worth noting during schema creation:

    • All the data tablespaces are created in the same directory, which is provided as the "datafile location" during installation, with the same datafile size. If you provided /u01/oracle/oradata/<SID> as the directory and 100 MB as the size, all the files are created in that directory (for example, /u01/oracle/oradata/<SID>/auc01.dbf).

    • All the index tablespaces are created in the same directory, which is provided as the "index datafile location" during installation, with the same datafile size. If you provided /u02/oracle/oradata/<SID> as the directory and 100 MB as the size, all the files are created in that directory (for example, /u02/oracle/oradata/<SID>/auc01.dbf).

    • All the schema and vortex users are created, and their temporary tablespace set to use TEMP tablespace.

    • The initial installation uses the default settings provided by the scripts for create table and index statements.

    • The SQL scripts have a drop statement before each of the create statements. During the initial installation, all the drop statements produce errors, because there are no iPlanet Market Maker objects in the database yet. Subsequent runs drop and recreate the tablespaces, users, and objects.

    CAUTION:

    Executing any of these scripts ($IMM_HOME/db/install/install_db_rel.sh, $IMM_HOME/db/install/install_db.sh or any SQL script) will erase the iPlanet Market Maker schema without warning. Do not execute these scripts in a production system when the iPlanet Market Maker installation is up and running and has data in it. These are install scripts and should be executed only during installation.

    • An error log is generated for each execution of the script, and a master log file containing all the logs, including the script execution output, is also created. These logs can be helpful in troubleshooting. The drop statements cause errors in the logs, but execution continues; these errors should be ignored.

    • After the base installation is done, the installer unzips and copies all the scripts to their appropriate locations. To change the schema scripts (for storage parameters and so on), edit the scripts and execute them again. Make a backup of each file, and do not change the file names.

There are two ways to recreate the schema using the changed parameters:

  1. Re-installing the database schema using the iPlanet Market Maker installer:

    1. Run the iPlanet Market Maker installer.

    2. Uncheck the iPlanet Market Maker Core and iPlanet Market Maker Directory Server database.

    3. Make sure that the iPlanet Market Maker Oracle database is checked.

    4. Re-enter the same parameters that have been used during initial installation for encryption, passwords, LDAP details, etc. Do not make any changes from initial input when re-installing.

    5. Verify changes in the database.

  2. Re-installing the database schema using iPlanet Market Maker scripts:

    1. Follow the installation instructions in "Oracle".

    2. Make sure that you enter the right parameters required for $IMM_HOME/db/install/install_db_rel.sh. (for example: $IMM_HOME/db/install/install_db_rel.sh param1 param2. See $IMM_HOME/db/install/install_db.sh for a sample).

    3. Verify changes in the database.


Sizing objects

The installation scripts for the iPlanet Market Maker software can be found in the directories $IMM_HOME/db/sql and $IMM_HOME/db/<module_name>/db/sql. The $IMM_HOME/db/sql directory contains the create SQL script that creates the tablespaces and schema users; the installer passes the datafile and index file locations and their sizes to this script. Each module (for example, cmn, auc, catalog) also has an sql directory that contains the create table scripts (for example, the $IMM_HOME/db/catalog/db/sql/*.sql files). Each table is created using these scripts. The storage parameters in the create table scripts need to be examined and changed based on the size of the marketplace. Default values can be used for most of the create table parameters, such as pctfree, pctused, initial extents, next, and pctincrease. Database administrators are advised to change the sizing parameters for the tables depending on their data storage requirements. "Oracle Table Sizing Information" in Appendix A serves as a guide in calculating the storage parameters. The average row lengths presented there are based on performance tests carried out on an iPlanet Market Maker installation in the test labs. The actual row lengths for each installation will vary, and the DBA should add 10 to 30 percent to the provided values based on their judgment.
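The Appendix A figures reduce to simple arithmetic. The sketch below multiplies row count by average row length and adds the upper 30 percent margin recommended above; the 160-byte average row length in the example is an assumption for illustration, not a value from Appendix A:

```shell
#!/bin/sh
# Sketch: estimate table storage from row count and average row length,
# adding the 30 percent upper margin recommended for row-length estimates.

# Usage: table_mb <rows> <average row length in bytes>
table_mb() {
    bytes=$(( $1 * $2 * 130 / 100 ))          # rows x avg length, +30 percent
    echo $(( (bytes + 1048575) / 1048576 ))   # round up to whole megabytes
}

table_mb 100000 160    # prints 20 (MB) for 100K rows at an assumed 160 bytes
```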


Catalog module

The size of the catalogs (both the master catalog and the buyer catalogs) directly affects the response times of the search and browse operations. When the master catalog is loaded, some of the main tables populated are CAT_MASTER_ENTRY, CAT_ATTR, and CAT_SEARCH_ATTR. The number of initial and next `extents' should be configured to match the size of the catalog to achieve the best performance. When buyers log in and create buyer catalogs (catalog views, not Oracle views), the table CAT_VIEW_ENTRY is populated with pointers to the tables CAT_MASTER_ENTRY and CAT_ATTR. A seller catalog with 100K entries in CAT_MASTER_ENTRY can have 500K rows in CAT_VIEW_ENTRY if five different buyers each create a view showing the entire master catalog, so the number of rows in CAT_VIEW_ENTRY grows as buyers create views of master catalogs. Creating buyer catalogs also adds records to the CAT_ATTR table for each category.

The following table gives the recommended sizes of 'extents' for core tables and their indices for the Catalog module. These numbers can vary depending on catalog structure (categories and items) and the number of attributes for each item.

Table 4-3    Recommended sizes of `extents' for Catalog database

  Oracle Table (Indices)     Rows (Catalog items)    Recommended storage size of `extents'
  ------------------------   --------------------    -------------------------------------
  CAT_MASTER_ENTRY           100000                  20 MB
  CAT_MASTER_ENTRY (index)   100000                  5 MB
  CAT_VIEW_ENTRY             275000                  60 MB
  CAT_VIEW_ENTRY (index)     275000                  15 MB
  CAT_ATTR                   800000                  130 MB
  CAT_ATTR (index)                                   35 MB
  CAT_SEARCH_ATTR            20000                   3 MB
  CAT_SEARCH_ATTR (index)    20000                   1 MB


Auctions module

The size of `extents' for the core tables and indices of the Auctions module is determined by the number of auctions and the number of bids for each auction.

Table 4-4    Recommended sizes of `extents' for Auctions database

  Oracle Table (Indices)      Rows (Auction components)    Recommended storage size of `extents'
  -------------------------   -------------------------    -------------------------------------
  AUC_AUCTION                 30000                        50 MB
  AUC_AUCTION (indices)       30000                        12 MB
  AUC_BID_HISTORY             400000                       100 MB
  AUC_BID_HISTORY (indices)   400000                       35 MB

In-box notification

In-box notifications and messages are stored in the iPlanet Market Maker infrastructure under the tables INBOX_ENTRY and CMN_PLM. The INBOX_ENTRY table contains all the inbox notifications. CMN_PLM is the container for all the custom messages for each object based on Java property files.

Table 4-5    Recommended sizes of `extents' for In-Box Notification

  Oracle Table (Indices)     Rows      Recommended storage size of `extents'
  ------------------------   ------    -------------------------------------
  INBOX_ENTRY                100000    60 MB
  INBOX_ENTRY (indices)      100000    25 MB
  CMN_PLM                    540000    45 MB
  CMN_PLM (indices)          540000    12 MB
  CMN_PLM_STRING             650000    30 MB
  CMN_PLM_STRING (indices)   650000    14 MB

The 'extent' sizes are calculated based on default sizing parameters (percent free, etc.) for the respective tables, as provided in the iPlanet Market Maker 1.0 table scripts. If the values for other parameters are changed, the extent sizes should be recalculated.


Recommended periodic DBA maintenance activities

Apart from daily DBA monitoring and logging activities, the following steps are recommended:

  • The queries are tuned to run using a cost-based optimizer. So, you must analyze the tables and indices and gather statistics periodically.

  • The catalog module uses the interMedia option for text searches. Text indices are not automatically updated by Oracle; therefore, it is necessary to manually synchronize the indices with any changes that have taken place in the actual tables. This synchronization must be performed by the Oracle administrator.

    In general, operating on the catalog indices requires the following steps, where the value PARAM will be replaced by an actual value, depending on the operation:

    1. Log in to the Oracle instance that hosts the iPlanet Market Maker database as user cat, password cat.

    2. Execute "alter index cat.cat_atr_value_txt1 rebuild online parameters('PARAM');" (Note the single quotes surrounding PARAM - these are necessary).

      In the above command, replace 'PARAM' with the following values.

      For synchronization of the indices, 'PARAM' = sync.

      For full optimization of the indices, 'PARAM' = optimize full.

      For full optimization, but only allowing the optimizer to run for a certain length of time, 'PARAM' = optimize full maxtime N, where N is the time in minutes that the optimizer will run.
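The rebuild command can be wrapped so the three PARAM variants above are generated consistently. The statement text is exactly the one given in step 2; the helper name is illustrative:

```shell
#!/bin/sh
# Sketch: emit the interMedia index-maintenance statement for a given
# PARAM value ("sync", "optimize full", or "optimize full maxtime N").

rebuild_sql() {
    echo "alter index cat.cat_atr_value_txt1 rebuild online parameters('$1');"
}

# Example (credentials as described in step 1):
#   rebuild_sql sync | sqlplus -s cat/cat
rebuild_sql "optimize full"
```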

We recommend the use of the ctxsrv process (or the dbms_job facility) to perform synchronization. This process (written and supplied by Oracle) provides regular synchronization updates. Execute the following steps:

  1. Log in (as the Oracle user) to the machine on which the iPlanet Market Maker database is hosted.

  2. Execute the following command:

          ctxsrv [-user ctxsys/passwd[@sqlnet_address]]
                 [-personality M]
                 [-logfile log_name]
                 [-sqltrace]

     For example:

          ctxsrv -user ctxsys/ctxsys -personality M -log ctx.log &

  3. To stop the ctxsrv process, execute the following steps:

    Log in (as the Oracle user) to the machine on which the iPlanet Market Maker database is hosted. Using SQLPLUS, log in to Oracle as user ctxsys, password ctxsys, and execute: begin ctx_adm.shutdown; end;

In addition, the DBA should perform a full optimization of the indices regularly, depending on the amount of change to the catalog. Because running ctxsrv contributes to fragmentation of the interMedia indices, the DBA should also run one of the following commands periodically to handle that fragmentation:

alter index myindex rebuild online parameters ('optimize fast');

or

alter index myindex rebuild online parameters ('optimize full');

Refer to the Oracle 8i interMedia Text Reference, Release 2 (8.1.6), for more details on performing periodic maintenance activities.


Job Scheduler Settings

The Job Scheduler runs in its own JVM, separate from the JVM in which the presentation and runtime layers of the iPlanet Market Maker software run.

The Job Scheduler process reads the loaded properties from the file VortexConfiguration.properties. This file has to be included in the classpath for the Job Scheduler process. The following properties must be modified for the Job Scheduler process:

  • Maximum Heap Size for the JVM

  • OnceOnly and Repeatable timer thread pool sizes

  • Sleep interval for OnceOnly thread

  • Repeat Sleep Interval

  • Transactional Mail Scheduler Interval

  • Max threads in Notifier Pool


Maximum Heap Size for the JVM

To set the maximum heap size for the JVM in which the Job Scheduler process runs, use the -Xmx256m option, which sets the maximum heap to 256 MB. This maximum heap size is sufficient for about 600 jobs registered with the Job Scheduler process.

#CFG_JS_EXE

# - The JobScheduler process.

37=java -Djava.compiler=NONE -Dvortex.logFile=jobscheduler.log -Xmx256m com.iplanet.ecommerce.vortex.util.scheduler.JobSchedulerProcess


OnceOnly and Repeatable Timer Thread Pool Sizes

Notifications about marketplace events, such as bidding on auctions, submitting orders to vendors, or approval of requisitions, are sent to users by email or to the users' Shared In-Box. Users can configure these notification jobs, and the Job Scheduler is the process that ensures that the notification jobs are triggered.

The Job Scheduler process has a OnceOnly timer thread and a Repeatable timer thread.

    • OnceOnly timers are related to jobs that have to be executed only once at a pre-specified time (for example, Auctions opening and closing).

    • Repeatable timers are related to jobs that must be executed repeatedly at a pre-defined time interval.

These two threads keep a "Job List" updated for two worker thread pools - the OnceOnly worker thread pool and the Repeatable worker thread pool. To provide for concurrent job requests, the numbers of workers in these two worker thread pools must be set to appropriate values.


OnceOnly Worker Thread Pool Size

The major load on the OnceOnly worker threads is auction opening and closing events. The actual pool size should be determined from the volume of auctions opening and closing on the site each day. For a fairly auction-intensive site, 20 is a safe value, because all 20 worker threads are called from their wait state only if there are 20 concurrent auction open or close jobs at the same instant on the site.

# CFG_JS_ONCEONLY_NUM_WORKERS

# Total number of workers in the thread pool for the OnceOnly Timer.

38=20 ( Default value = 3 )


Repeatable Worker Thread Pool Size

The two repeatable timers that can be registered with the Job Scheduler are the Transactional Mail Scheduler and the Rfx Job Scheduler.

    • The Transactional Mail Scheduler is a repeatable timer implemented for reading new entries in mail_info and handing over the registered notifications to the Notifier thread pool for delivery to the Mail Transfer Agent.

    • The Rfx Job Scheduler is the Rfx reaper, which marks as pending those Rfx requests and replies that are due to expire in one day or less.

To account for any concurrent repeatable requests from these timers, 4 is a safe value for the Repeatable thread pool size.

#CFG_JS_REPEAT_NUM_WORKERS

# - Total number of workers in the thread pool for the Repeat Timer.

39=4 ( Default value = 3 )


Sleep Interval for OnceOnly Thread

The OnceOnly database polling thread polls the cmn_job_scheduler_entry table for timers added or revised since the last read, keeping the job queue for the OnceOnly thread pool synchronized with additions and changes to the timers.

A sleep of two minutes between polls places a low load on the database while sacrificing little synchronization between the current status of the timers and the job queue for the OnceOnly pool. Only rarely would an auction-open timer be registered to fire within two minutes of job registry, or an auction close time be extended just two minutes before closing. Only under such rare circumstances could the two-minute sleep result in unacceptably delayed callbacks; in such cases, decrease the sleep interval to an appropriate value.

#CFG_JS_ONCEONLY_SLEEP_INTERVAL

# - Sleep interval for the OnceOnly timer database thread in milliseconds.

40=120000 ( Default value = 30000 )


Repeat Sleep Interval

The Repeat database polling thread polls the cmn_job_scheduler_entry table for timers added or revised since the last read, keeping the job queue for the Repeat thread pool synchronized with additions and changes to the timers.

The two-minute sleep between polls is a good trade-off between the load on the database and the synchronization of the current repeat timer status with the job queue for the Repeat thread pool. The Transactional Mail Scheduler and RfxJobScheduler repeat timers are registered at module initialization time and are not expected to change immediately after registry.

#CFG_JS_REPEAT_SLEEP_INTERVAL

# - Sleep interval for the Repeat timer database thread.

41=120000 ( Default value = 60000 )


Transactional Mail Scheduler Interval

The Transactional Mail Scheduler (TMS) is a repeatable timer registered by the application for the purpose of sending out email notifications. The time for the callback can be configured by a setting in the VortexConfiguration.properties file. On callback, the TMS reads the entries in the mail_info table, calls the Notification API to send out the emails, and deletes the entries from the mail_info table once they have been handed over to the Notification API.

The sleep interval for the TMS can be increased from its default value of 30 seconds to 120 seconds. This lowers the database load considerably while sacrificing a little of the up-to-the-second timeliness of the email notifications. Because the time lag between handing an email notification to the Notifier Thread Pool and its actual delivery depends largely on the settings of the mail transfer agents in the delivery chain, this sacrifice is negligible if it buys some performance gains.

#CFG_TRANSACTIONAL_MAIL_SCHEDULER_INTERVAL

# - Timeout interval parameter for the MailScheduler callback. (used for Notifications)

60=120000 ( Default value = 30000 )


Max Threads in Notifier Pool

The Notification API hands the dispatch of all emails over to a set of worker threads called the Notifier Thread Pool. Increasing the sleep interval of the Transactional Mail Scheduler results in a larger batch of emails being handed to the Notifier Thread Pool each time the TMS callback executes. This larger load on the pool necessitates a corresponding increase in the thread pool size. The actual setting depends on the volume of email notifications expected to be generated during the Transactional Mail Scheduler sleep period.

#CFG_MAX_THREADS_IN_NOTIFIER_POOL

# - Number of threads in the notifier thread pool.

32=30 ( Default value = 10 )
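The pool size can be estimated from the mail volume expected per TMS interval. Both figures in the example below are assumptions for illustration, not measured iPlanet Market Maker values:

```shell
#!/bin/sh
# Sketch: size the Notifier pool as expected mails per TMS interval divided
# by the mails one thread can hand off in that interval, rounded up.

# Usage: notifier_threads <mails per interval> <mails per thread per interval>
notifier_threads() {
    echo $(( ($1 + $2 - 1) / $2 ))
}

notifier_threads 600 20    # prints 30: 600 assumed mails, 20 per thread assumed
```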


Installing and Configuring Resonate

Resonate Central Dispatcher is a full-featured, software-based load-balancing solution. Running on a variety of platforms, it can balance load across a large number of servers to provide massive horizontal scalability. Moreover, it detects the case where a server becomes unavailable and redirects the load to the remaining servers.

These features give the iPlanet Market Maker software the scalability and failover capabilities demanded by 24x7 business-to-business marketplaces.


Deployment

When deploying the iPlanet Market Maker software across multiple servers, it is necessary to deploy Resonate alongside to provide uniform access to your servers. Resonate is configured by setting up a virtual IP address to which all traffic is directed. The controller of the load-balancing cluster listens on this IP address and redirects the traffic according to the load-balancing rules set in the Resonate configuration.

Resonate should be deployed at the layer you want load-balanced. Typically this is the first layer your users contact, the one exposed to the internet or extranet. In a three-tiered scenario, this is typically the proxy layer; in a two-tiered scenario, it is the web servers themselves.


Configuration

Because the iPlanet Market Maker software stores the session key in a cookie, Resonate should be configured to keep routing a session to the same server once a session cookie has been obtained. This requires a small modification on each web server so that a different cookie name is passed back from each one.
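The per-server cookie-name scheme can be illustrated on throwaway files. The paths and server names below are examples for demonstration only, not your actual server layout:

```shell
# Two mock web-server config trees, each reporting a distinct session-cookie
# name -- the same arrangement the procedure below creates on the real servers.
TMP=$(mktemp -d)
mkdir -p "$TMP/server1/config" "$TMP/server2/config"
echo "context.global.sessionCookie=NSES40SessionServer1" > "$TMP/server1/config/contexts.properties"
echo "context.global.sessionCookie=NSES40SessionServer2" > "$TMP/server2/config/contexts.properties"
# A quick check that every server reports a unique cookie name:
NAMES=$(grep -h sessionCookie "$TMP"/server*/config/contexts.properties)
echo "$NAMES"
rm -rf "$TMP"
```

The same grep, pointed at the real contexts.properties on each machine, is a convenient way to verify the renaming step after you have performed it.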

Resonate is a full-featured product, and so the discussion of all of its configuration parameters is beyond the scope of this document. The following are typical configuration steps for a multi-tiered scenario. It is assumed that you have already installed Resonate Central Dispatcher on each of the servers that you wish to have load-balanced. The default installed components should be sufficient.

  1. Edit your web server configuration files to change the session cookie name to a server-specific name.

    1. Locate the file contexts.properties on your web server. For example,

         /usr/netscape/server4/https-myserver/config/contexts.properties

    The parameter that controls this name (commented out by default) and its default value is:

       context.global.sessionCookie=NSES40Session

    2. Change this to your server-specific name.

         context.global.sessionCookie=NSES40SessionServer1

  2. Repeat step 1 across all of the web servers that you wish to load-balance, varying only the session cookie name for each server.

  3. Restart the web servers.

  4. Launch the Resonate Central Dispatcher Dispatch Manager application. In a default installation, this is found at /usr/local/resonate/bin/dispatch-manager.

    This launches a Java GUI from which you can configure your Resonate Central Dispatch cluster.

  5. First it will prompt you for the node to which to connect.

    1. Make sure you have the Resonate agent running on the server to which you are connecting (probably the local host). If not, you can run it by:

            <Resonate Home>/bin/cdagentctl start

    2. When prompted for the password, give the system password under which Resonate was installed. (The account is a user account, and the password can be set via the passwd command). You will be brought to the main window of the Central Dispatcher.

  6. Select `Site Setup' from the bottom menu to begin setting up the nodes.

    1. Select the `System Nodes' tab. Type the fully qualified server name in the node name field. Define an alias of your choice and select `Add'.

    2. Repeat the steps for each of the servers in your cluster.

    3. Click `Apply changes'.

  7. Create the virtual IP, which will represent the cluster. This should be the public name of the servers (e.g. www.sun.com), which is mapped to the virtual IP address to which the cluster will listen.

    1. Select the VIP tab and fill in the VIP field with the fully qualified domain name of the virtual IP.

    2. Fill in the IP address with the appropriate IP.

    The Primary Scheduler field will be the server that is actually listening to the traffic coming in and redirecting it to the appropriate servers. This will place a small additional load on this server.

    The Backup Scheduler field will be the server that takes over listening to the VIP in case the Scheduler server goes down. It gets promoted to the Primary Scheduler.

  8. The Policies tab refers to the way you want the load-balancing to take place. Your default CPU Load and Open Connections priority should be sufficient.

  9. Select Scheduling Rules from the bottom. Now we are ready to set up the load balancing.

    1. The first step is to enable basic load-balancing rules on the HTTP port. This will permit load balancing on the first hit to the site. It will load-balance all generic HTTP traffic across all the servers.

    2. On the HTTP tab, select your VIP on the left. Select the Virtual Port to listen on (the port on which traffic arrives on the public side).

    3. Select the server ports as the ports on which your load balanced servers listen (typically 80). Select * on the right to balance the load across all servers registered with this scheduler.

    4. Select the resource as URL and the path expression as * to enable all references to the URL to be directed across the servers.

  10. Select the `Cookie Persistence' tab to set up cookie-persistent load balancing.

    1. Select your VIP on the left and set the virtual port as you did in step 9b.

    2. Select the server ports as in step 9c, and select one of the servers (e.g., server1).

    3. Select the Resource as Cookie.

    4. Set the attribute-value pair to the session cookie name you specified for that server in step 1 and append to it "=*". For example,

            NSES40SessionServer1=*

    5. Click Add.

    6. Repeat these steps for each server, varying the attribute-value pair accordingly.

    7. Click `Apply Changes'.

      This will make the sessions persist to a single server while the cookie is still active.

  11. Start the Resonate Cluster.

    1. Select `Site Setup' from the bottom menu, then the `Operations' tab.

    2. Click `Start Site'.

Now the Resonate Cluster should be active. You should be able to hit the site transparently across all the servers in the cluster through the VIP that you specified.
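Conceptually, the cookie-persistence rules configured above behave like the following match. This is a sketch of the idea only, not Resonate's actual implementation; the cookie value is an invented example:

```shell
# A request carrying a session cookie issued by server1 is pinned to server1;
# a request with no session cookie falls through to the generic HTTP rule.
COOKIE_HEADER="NSES40SessionServer1=AB12CD34"   # example Cookie header value
case "$COOKIE_HEADER" in
  NSES40SessionServer1=*) TARGET=server1 ;;
  NSES40SessionServer2=*) TARGET=server2 ;;
  *)                      TARGET=any ;;
esac
echo "$TARGET"
```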

This was a basic configuration of Resonate Central Dispatch, which should be sufficient for simple deployments of the iPlanet Market Maker software. Consult the Resonate Installation and User's Guide for further information.


Copyright © 2000 Sun Microsystems, Inc. Some preexisting portions Copyright © 2000 Netscape Communications Corp. All rights reserved.

Last Updated February 05, 2001