Sun GlassFish Web Space Server 10.0 Administration Guide

Clustering of Web Space Server

Once you have Web Space Server installed on more than one node of your application server, several optimizations need to be made. At a minimum, Web Space Server should be configured in the following way for a clustered environment:


Note –

The default HSQL database cannot be used in a clustered environment. Configure MySQL or another compatible database instead.

A cluster setup must use the enterprise version of GlassFish 2.1 patch 2 or later.


Many of these configuration changes can be made by adding or modifying properties in your portal-ext.properties file. Remember that this file overrides the defaults that are in the portal.properties file. The original version of this file can be found in the Liferay source code or can be extracted from the portal-impl.jar file in your Liferay installation. It is a best practice to copy the relevant section that you want to modify from portal.properties into your portal-ext.properties file, and then modify the values there.
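
For example, copying the following default from portal.properties into portal-ext.properties and editing its value is all that is required to override it (this particular property is discussed in more detail in the Lucene Configuration section below):

lucene.dir=${resource.repositories.root}/lucene/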

Jackrabbit Sharing

Web Space Server uses Jackrabbit from Apache as its JSR-170 compliant document repository. By default, Jackrabbit is configured to store documents on the local file system of the server on which Web Space Server is installed, in the Glassfish home/domains/domain1/webspace/jackrabbit folder. Inside this folder is Jackrabbit's configuration file, called repository.xml.

To simply move the default repository location to a shared folder, you do not need to edit Jackrabbit's configuration file. Instead, find the section in portal.properties labeled JCR and copy/paste that section into your portal-ext.properties file. One of the properties, by default, is the following:


jcr.jackrabbit.repository.root=${resource.repositories.root}/jackrabbit

Change this property to point to a shared folder that all of the nodes can see. A new Jackrabbit configuration file is generated in that location.
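
For example, if all of the nodes mount a shared directory at /mnt/webspace-share (this path is only an illustration; substitute whatever shared location your nodes can reach), the property would read:

jcr.jackrabbit.repository.root=/mnt/webspace-share/jackrabbit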

Note that because of file locking issues, this is not the best way to share Jackrabbit resources. If two users are logged in at the same time uploading content, you could encounter data corruption using this method, so it should not be used for a production system. Instead, for better data protection, you should redirect Jackrabbit into your database of choice. This requires editing Jackrabbit's configuration file.

The default Jackrabbit configuration file has sections commented out for moving the Jackrabbit configuration into the database. This has been done to make it as easy as possible to enable this configuration. To move the Jackrabbit configuration into the database, simply comment out the sections relating to the file system and uncomment the sections relating to the database. By default, these are configured for a MySQL database. If you are using another database, you will likely need to modify the configuration, because some databases require specific changes to the configuration file. For example, the default configuration uses Jackrabbit's DbFileSystem class to mimic a file system in the database. While this works well in MySQL, it does not work for all databases. If you are using an Oracle database, for instance, you will need to modify this to use OracleFileSystem. Please see the Jackrabbit documentation at http://jackrabbit.apache.org for further information.
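
As an illustration only, the uncommented database file system section for MySQL typically looks similar to the following sketch. The driver, URL, user, password, and prefix values are placeholders, and the exact class names and parameters should be taken from the commented-out entries in your own repository.xml:

<FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
  <param name="driver" value="com.mysql.jdbc.Driver"/>
  <param name="url" value="jdbc:mysql://dbhost:3306/jcr"/>
  <param name="user" value="jcruser"/>
  <param name="password" value="jcrpassword"/>
  <param name="schema" value="mysql"/>
  <param name="schemaObjectPrefix" value="J_FS_"/>
</FileSystem>

For Oracle, the class would instead be org.apache.jackrabbit.core.fs.db.OracleFileSystem, with the schema parameter set to oracle.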

You will also likely need to modify the JDBC database URLs so that they point to your database. Don't forget to create the database first, and grant the user ID you are specifying in the configuration file access to create, modify, and drop tables.
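
For MySQL, a minimal sketch of that preparation might look like the following; the database name, user, and password are placeholders and must match the values you put in repository.xml:

CREATE DATABASE jcr CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON jcr.* TO 'jcruser'@'%' IDENTIFIED BY 'jcrpassword';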

Once you have configured Jackrabbit to store its repository in a database, the necessary database tables will be created automatically the next time you bring up Liferay. Jackrabbit, however, does not create indexes on these tables, so over time this can become a performance penalty. To fix this, you will need to manually go into your database and index the primary key columns for all of the Jackrabbit tables.
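
The actual table and column names depend on the persistence manager and the schemaObjectPrefix you configured, so the names in the following sketch are placeholders only; check your database for the real Jackrabbit tables and repeat the statement for each of them:

CREATE INDEX ix_jackrabbit_node ON JACKRABBIT_NODE_TABLE (NODE_ID);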

All of your Liferay nodes should be configured to use the same Jackrabbit repository in the database. Once that is working, you can create a Jackrabbit cluster (please see the following section).

Lucene Configuration

Lucene, the search indexer that Web Space Server uses, can either be configured to share a single index across the clustered environment or to create a separate index on each node of the cluster. If you wish to have a shared index, you will need to share it either on the file system or in the database.

The Lucene configuration can be changed by modifying values in your portal-ext.properties file. Open your portal.properties file and search for the text Lucene. Copy that section and then paste it into your portal-ext.properties file.

If you wish to store the Lucene search index on a file system that is shared by all of the Web Space Server nodes, you can modify the location of the search index by changing the lucene.dir property. By default, this property points to the /webspace/lucene folder inside the home folder of the user running Web Space Server:


lucene.dir=${resource.repositories.root}/lucene/

Change this to the folder of your choice. To make the change take effect, you will need to restart Web Space Server. You can point all of the nodes to this folder, and they will use the same index.
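
For example, to reuse the same shared mount suggested for Jackrabbit above (again, the path is only an illustration):

lucene.dir=/mnt/webspace-share/lucene/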

As with Jackrabbit, however, sharing the index on the file system is not the best approach, as it could result in file corruption if different nodes try to reindex at the same time. A better way is to share the index via a database, where the database can enforce data integrity on the index. This is very easy to do; it is a simple change to your portal-ext.properties file.

There is a single property called lucene.store.type. By default, this property is set to store the index on the file system. You can store the index in the database instead by setting it to the following:


lucene.store.type=jdbc

The next time Web Space Server is started, new tables will be created in the Web Space Server database, and the index will be stored there. If all the Web Space Server nodes point to the same database tables, they will be able to share the index.

Alternatively, you can leave the configuration alone, and each node will have its own index. This ensures that there are no collisions when multiple nodes update the index, because they all have separate indexes.

Hot Deploy

Plugins which are hot deployed will need to be deployed separately to all of the Web Space Server nodes. Each node should, therefore, have its own hot deploy folder. This folder needs to be writable by the user under which Web Space Server is running, because plugins are moved from this folder to a temporary folder when they are deployed. This is to prevent the system from entering an endless loop, because the presence of a plugin in the folder is what triggers the hot deploy process.

When you want to deploy a plugin, copy that plugin to the hot deploy folders of all of the Web Space Server nodes. The hot deploy directory for Web Space Server when running on GlassFish is Glassfish home/domains/domain1/webspace/deploy. Depending on the number of nodes, it may be best to create a script to do this. Once the plugin has been deployed to all of the nodes, you can then make use of it (by adding the portlet to a page or choosing the theme as the look and feel for a page or page hierarchy).
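
A minimal sketch of such a script is shown below; the node host names and the GlassFish installation path are placeholders, and the sketch assumes the nodes are reachable over scp:

#!/bin/sh
# Usage: deploy-plugin.sh my-portlet.war
# Copies the given plugin into the hot deploy folder of every node.
for node in node1.example.com node2.example.com; do
    scp "$1" "$node:/opt/glassfish/domains/domain1/webspace/deploy/"
done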

Some application servers provide a facility that allows the end user to deploy an application to one node, after which it is copied to all of the other nodes. If you have configured your application server to support this, you won't need to hot deploy a plugin to all of the nodes, as your application server will handle it transparently. Make sure, however, that you use the hot deploy mechanism to deploy plugins, as in many cases Web Space Server slightly modifies plugin .war files when hot deploying them.