Once you have WebSynergy installed on more than one node of your application server, several optimizations need to be made. At a minimum, WebSynergy should be configured in the following way for a clustered environment:
- The default HSQL database cannot be used in a clustered environment. Configure MySQL or another supported database instead.
- All nodes should point to the same Liferay database.
- Jackrabbit, the JSR-170 content repository, should be on a shared file system (not recommended) or in a database that is shared by all nodes.
- Similarly, Lucene, the full text search indexer, should be:
  - on a shared file system available to all the nodes (not recommended), or
  - in a database that is shared by all the nodes, or
  - on a separate file system for each node, or
  - disabled, with a separate pluggable enterprise search server configured instead.
- If you have not configured your application server to use farms for deployment, each node should have its own hot deploy folder, and plugins will have to be deployed to each node individually. This can be done via a script.
Many of these configuration changes can be made by adding or modifying properties in your portal-ext.properties file. Remember that this file overrides the defaults that are in the portal.properties file. The original version of this file can be found in the Liferay source code or can be extracted from the portal-impl.jar file in your Liferay installation. It is a best practice to copy the relevant section that you want to modify from portal.properties into your portal-ext.properties file, and then modify the values there.
WebSynergy uses Jackrabbit from Apache as its JSR-170 compliant document repository. By default, Jackrabbit is configured to store the documents on the local file system upon which Liferay is installed, in the <Glassfish home>/domains/domain1/websynergy/jackrabbit folder. Inside this folder is Jackrabbit's configuration file, called repository.xml.
To simply move the default repository location to a shared folder, you do not need to edit Jackrabbit's configuration file. Instead, find the section in portal.properties labeled JCR and copy/paste that section into your portal-ext.properties file. One of the properties, by default, is the following:
jcr.jackrabbit.repository.root=${resource.repositories.root}/jackrabbit
Change this property to point to a shared folder that all of the nodes can see. A new Jackrabbit configuration file will be generated in that location.
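For example, assuming a shared mount such as /mnt/liferay-shared (a hypothetical path; substitute a folder that every node can actually see), the entry in portal-ext.properties might look like this:

```properties
# Hypothetical shared folder visible to every node in the cluster
jcr.jackrabbit.repository.root=/mnt/liferay-shared/jackrabbit
```

Each node pointed at this folder will use the same repository, and Jackrabbit will generate its repository.xml there on first startup.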
Note that because of file locking issues, this is not the best way to share Jackrabbit resources. If two users are logged in at the same time uploading content, you could encounter data corruption using this method, so we do not recommend it for a production system. Instead, for better data protection, you should redirect Jackrabbit into your database of choice. This will require editing Jackrabbit's configuration file.
The default Jackrabbit configuration file has sections commented out for moving the Jackrabbit configuration into the database. This has been done to make it as easy as possible to enable this configuration. To move the Jackrabbit configuration into the database, simply comment out the sections relating to the file system and uncomment the sections relating to the database. By default, these are configured for a MySQL database. If you are using another database, you will likely need to modify the configuration, as some databases require specific changes to the configuration file. For example, the default configuration uses Jackrabbit's DbFileSystem class to mimic a file system in the database. While this works well in MySQL, it does not work for all databases; if you are using an Oracle database, you will need to change this to OracleFileSystem. Please see the Jackrabbit documentation at http://jackrabbit.apache.org for further information.
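As a rough sketch, the database-backed file system section of repository.xml looks something like the fragment below. The driver, URL, credentials, and prefix shown here are illustrative placeholders, not the exact values shipped in your file; use the commented-out sections of your own repository.xml as the authoritative template.

```xml
<!-- Illustrative only: store Jackrabbit's virtual file system in MySQL.
     Replace the connection parameters with your own. -->
<FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
  <param name="driver" value="com.mysql.jdbc.Driver"/>
  <param name="url" value="jdbc:mysql://dbhost/jackrabbit"/>
  <param name="user" value="jcruser"/>
  <param name="password" value="jcrpassword"/>
  <param name="schema" value="mysql"/>
  <param name="schemaObjectPrefix" value="J_FS_"/>
</FileSystem>
```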
You will also likely need to modify the JDBC database URLs so that they point to your database. Don't forget to create the database first, and grant the user ID you are specifying in the configuration file access to create, modify, and drop tables.
Once you have configured Jackrabbit to store its repository in a database, the next time you bring up Liferay, the necessary database tables will be created automatically. Jackrabbit, however, does not create indexes on these tables, and so over time this can be a performance penalty. To fix this, you will need to manually go into your database and index the primary key columns for all of the Jackrabbit tables.
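The exact table names Jackrabbit creates depend on the workspace names and the schemaObjectPrefix values in repository.xml, so inspect your database after the first startup to see what was generated. As a hedged illustration only, if Jackrabbit created a bundle table named J_DEFAULT_BUNDLE keyed by NODE_ID_HI and NODE_ID_LO (hypothetical names), the index could be created like this:

```sql
-- Table, column, and index names are illustrative; use the names
-- Jackrabbit actually created in your database.
CREATE INDEX IX_J_DEFAULT_BUNDLE
    ON J_DEFAULT_BUNDLE (NODE_ID_HI, NODE_ID_LO);
```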
All of your Liferay nodes should be configured to use the same Jackrabbit repository in the database. Once that is working, you can create a Jackrabbit cluster (please see the section below).
Lucene, the search indexer which WebSynergy uses, can be in a shared configuration for a clustered environment, or an index can be created on each node of the cluster. If you wish to have a shared index, you will need to either share the index on the file system or in the database.
The Lucene configuration can be changed by modifying values in your portal-ext.properties file. Open your portal.properties file and search for the text Lucene. Copy that section and then paste it into your portal-ext.properties file.
If you wish to store the Lucene search index on a file system that is shared by all of the WebSynergy nodes, you can modify the location of the search index by changing the lucene.dir property. By default, this property points to the /websynergy/lucene folder inside the home folder of the user running WebSynergy:
lucene.dir=${resource.repositories.root}/lucene/
Change this to the folder of your choice. To make the change take effect, you will need to restart WebSynergy. You can point all of the nodes to this folder, and they will use the same index.
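For example, assuming the same hypothetical shared mount as above, the entry in portal-ext.properties might be:

```properties
# Hypothetical shared folder for the Lucene index, visible to all nodes
lucene.dir=/mnt/liferay-shared/lucene/
```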
As with Jackrabbit, however, this is not the best way to share the search index, as it could result in file corruption if different nodes try to reindex at the same time. We do not recommend this for a production system. A better way is to share the index via a database, where the database can enforce data integrity on the index. This is very easy to do; it requires only a simple change to your portal-ext.properties file.
There is a single property called lucene.store.type. By default this is set to go to the file system. You can change this so that the index is stored in the database by making it the following:
lucene.store.type=jdbc
The next time WebSynergy is started, new tables will be created in the WebSynergy database, and the index will be stored there. If all the WebSynergy nodes point to the same database tables, they will be able to share the index.
Alternatively, you can leave the configuration alone, and each node will have its own index. This ensures that there are no collisions when multiple nodes update the index, because each node has a separate index.
Plugins which are hot deployed will need to be deployed separately to all of the WebSynergy nodes. Each node should, therefore, have its own hot deploy folder. This folder needs to be writable by the user under which WebSynergy is running, because plugins are moved from this folder to a temporary folder when they are deployed. This is to prevent the system from entering an endless loop, because the presence of a plugin in the folder is what triggers the hot deploy process.
When you want to deploy a plugin, copy that plugin to the hot deploy folders of all of the WebSynergy nodes. The hot deploy directory for WebSynergy when running on GlassFish is <Glassfish home>/domains/domain1/websynergy/deploy. Depending on the number of nodes, it may be best to create a script to do this. Once the plugin has been deployed to all of the nodes, you can then make use of it (by adding the portlet to a page or choosing the theme as the look and feel for a page or page hierarchy).
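Such a script can be very simple. The sketch below copies one plugin into each node's hot deploy folder; the node folders and the plugin file are hypothetical stand-ins created locally so the sketch is runnable, and a real script would typically scp or rsync the plugin to each node instead:

```shell
#!/bin/sh
# Sketch: push one plugin to every node's hot deploy folder.
# Paths below are hypothetical; in production you would list the real
# deploy folders (or remote hosts) of your WebSynergy nodes.
PLUGIN=/tmp/sample-portlet.war
NODE_DIRS="/tmp/node1/websynergy/deploy /tmp/node2/websynergy/deploy"

touch "$PLUGIN"            # stand-in for the plugin .war file

for dir in $NODE_DIRS; do
  mkdir -p "$dir"          # make sure the hot deploy folder exists
  cp "$PLUGIN" "$dir/"     # dropping the file here triggers hot deploy
done
```

In a real cluster, each cp would become something like an scp to that node's deploy folder, run under a user with write access to it.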
Some application servers provide a facility which allows the end user to deploy an application to one node, after which it is copied to all of the other nodes. If you have configured your application server to support this, you won't need to hot deploy a plugin to all of the nodes, as your application server will handle it transparently. Make sure, however, that you use the hot deploy mechanism to deploy plugins, as in many cases WebSynergy slightly modifies plugin .war files when hot deploying them.