Sun GlassFish Web Space Server 10.0 Administration Guide

Chapter 8 Advanced Web Space Server Configuration

There are also some lower-level settings that you may want to customize further. These include changing certain out-of-the-box defaults, configuring security, and adding features to Web Space Server through plugin management. This chapter covers the following topics:

Deploying Applications to Web Space Server

Sun GlassFish Web Space Server provides an extensible platform on which custom applications, including portlets, web plugins, hook plugins, and theme plugins, can be developed (using the NetBeans Portal Pack) and deployed to augment the functionality of Web Space Server. The artifacts can be copied into the hot deploy directory of Web Space Server. The hot deploy listener injects the necessary libraries into the WAR file and deploys it to GlassFish Enterprise Server by taking advantage of the JSR-88 feature or the auto deploy directory feature of GlassFish Enterprise Server.

In many production deployments, auto deploy may be disabled for security reasons, or the server may be deployed to a GlassFish cluster where applications must always be deployed to the DAS (Domain Administration Server). In these two cases, copying WAR files to the hot deploy directory is not an option. To address this scenario, Web Space Server provides a command-line tool, process.xml, which generates a "massaged" WAR (with all the injections that are otherwise performed by the hot deploy listener) that can be deployed manually to the server using the asadmin tool.

Deploying Applications Using process.xml

Consider the example of deploying a custom portlet solr-web-5.2.0.1.war to Web Space Server using process.xml.

To Deploy Applications Using process.xml

  1. Copy the custom portlet (solr-web-5.2.0.1.war) into the unprocessed directory located inside the root directory for Web Space Server (the directory to which Web Space Server is unzipped).

    cp solr-web-5.2.0.1.war /Webspace_install_root/var/webspace/war-workspace/unprocessed

  2. Navigate to the Webspace_install_root/var/webspace/war-workspace folder.

    cd /Webspace_install_root/var/webspace/war-workspace

  3. Run ant -f process.xml.

    Provide the required inputs when prompted. The following screen output illustrates the process.


    $ cp solr-web-5.2.0.1.war /Webspace_install_dir/var/webspace/war-workspace/unprocessed
    $ cd /Webspace_install_dir/var/webspace/war-workspace
    $ ant -f process.xml 
    Buildfile: process.xml
    
    check-ant:
    
    show-user-warning:
        [input] JAVA_HOME must be set to JDK 1.5 or greater and java must be available for execution. Webcontainer must be stopped. [RETURN to continue or CONTROL-C to stop]
    
    
    set-war-properties:
        [input] Enter war file (include full path)  [/Webspace_install_dir/var/webspace/war-workspace/my.war]
    /Webspace_install_dir/var/webspace/war-workspace/unprocessed/solr-web-5.2.0.1.war
        [input] Is war a portlet, web, theme, hook or layouttemplate?  [portlet]
    web
        [input] Enter deployed war name  [solr-web-5.2.0.1]
    
    
    process:
         [java] Loading jar:file:/Webspace_install_dir/var/webspace/war-workspace/sources/webspace/WEB-INF/lib/portal-impl.jar!/system.properties
         [java] Loading jar:file:/Webspace_install_dir/var/webspace/war-workspace/sources/webspace/WEB-INF/lib/portal-impl.jar!/portal.properties
         [java] Loading jar:file:/Webspace_install_dir/var/webspace/war-workspace/sources/webspace/WEB-INF/lib/enterprise.jar!/portal-sun.properties
         [java] Loading jar:file:/Webspace_install_dir/var/webspace/war-workspace/sources/webspace/WEB-INF/lib/enterprise.jar!/portal-sun-tools.properties
         [java] Loading jar:file:/Webspace_install_dir/var/webspace/war-workspace/sources/webspace/WEB-INF/lib/portal-impl.jar!/captcha.properties
         [java] 2009-04-28 23:39:02,177 [main] INFO  com.liferay.portal.util.PortalImpl - Portal lib directory /opt/wsynergy/ws-root/var/webspace/war-workspace/sources/webspace/WEB-INF/lib/
         [java]   Expanding: /Webspace_install_dir/webspace/war-workspace/unprocessed/solr-web-5.2.0.1.war into /var/tmp/20090428233902276
         [java]   Copying 1 file to /var/tmp/20090428233902276/WEB-INF/classes
         [java]   Copying 1 file to /var/tmp/20090428233902276/WEB-INF/classes
         [java]   Building war: /var/tmp/20090428233903352
         [java] 2009-04-28 23:39:03,944 [main] INFO  com.liferay.portal.kernel.util.ServerDetector - Detected server tomcat
         [java]   Deleting directory /var/tmp/20090428233902276
         [echo] 
         [echo] Processed war is in /Webspace_install_dir/var/webspace/war-workspace/finals.
    
    BUILD SUCCESSFUL
    Total time: 33 seconds
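Once process.xml reports BUILD SUCCESSFUL, the massaged WAR in the finals directory can be deployed manually. The following sketch wraps the two common options in a small shell function; the deploy_processed_war name, the paths, and the autodeploy fallback logic are illustrative, not part of the product.

```shell
#!/bin/sh
# Sketch: deploy a WAR processed by process.xml. The function name and
# paths are hypothetical; adjust them for your installation.
deploy_processed_war() {
    war=$1      # e.g. .../var/webspace/war-workspace/finals/solr-web-5.2.0.1.war
    domain=$2   # e.g. /glassfish/domains/domain1
    if [ -d "$domain/autodeploy" ]; then
        # Auto deploy is available: drop the WAR into the directory.
        cp "$war" "$domain/autodeploy/"
    else
        # Otherwise deploy through the DAS with asadmin (JSR-88 style).
        asadmin deploy "$war"
    fi
}
```

For a cluster, you would typically pass a --target option to asadmin deploy so that the application is deployed through the DAS to all instances.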

Promoting a Portal to Production Environment

You can create a custom portal using Sun GlassFish Web Space Server. There are two scenarios for migrating it to the production environment:

Moving From Development to Production

During the development phase, a team of developers collaborates on customizing Web Space Server. Customization might include creating or modifying portlets, hooks, themes, layouts, and pages. Of these artifacts, pages are stored in the database, while the others can be deployed in WAR file format. For the initial production cut-over, the database from the development environment can be exported and imported into the production environment.

The following is the process involved:

  1. Generate the WAR files for custom portlets, themes, layouts and hooks and deploy them to the production Web Space Server. These WAR files need to be placed in the hot deploy area.

  2. Export the database from the development environment.

  3. Import the database that has been exported in the previous step into the production environment.

Deploying Content From Staging to Production

In this scenario, content is developed in the staging environment and published to the production server. Web Space Server provides a “Staging” feature for Community and Organization pages to address this requirement.

Essentially, the community and organization "staging" feature provides an option to stage pages on the same server: an administrator can review page changes before publishing them, and the publishing can be scheduled. In addition, a workflow can be attached to the process so that the changes are approved by an authority.

The "Manage Pages" interface includes a "Publish to Remote" option that deploys the pages to a remote server. This can serve as the process for promoting content developed in the staging/development environment to the production system.

Staging

You can “stage” pages on the production server before they are published to live. Staging can also include an approval process involving different levels of authority.

Here is how it works:

To Do Staging Without Workflow

  1. Log in to Web Space Server as admin user.

  2. The administrator or the owner of a community or an organization can enable the Activate Staging option via Manage Pages from the Control Panel.

  3. Once staging is enabled, a user with appropriate privileges can add or modify pages and their content, and publish them when they are finalized. While pages are being developed in staging, live pages are not affected, and no changes are visible on them.

To Do Staging With Workflow

  1. Log in to Web Space Server as admin user.

  2. The administrator or the owner of a community or organization can enable the Activate Staging and Activate Workflow options via Manage Pages from the Control Panel.

  3. Choose the number of stages for the workflow. The default is 3.

  4. Specify the roles for each stage of the approval process. The administrator can create the roles, with the appropriate scope and permissions, in the My Community portlet. For the Content Creator community role, assign Manage Pages from the Define Permissions option.

  5. A Content Creator can create pages and submit a proposal. Once the pages are approved through the approval chain, the final approver can publish the pages to the live system.

Publishing to a Remote Server

Remote publishing allows pages to be published from a staging server (where the pages are approved and published locally) to a production server. When using remote publishing, the user who publishes remotely must have a user account on both servers with the same email address and password.

The production server (where the pages are published to) must be configured to accept connections from the staging server.

For example, the following entry in portal-ext.properties allows remote publishing from IP address 192.18.123.38:

tunnel.servlet.hosts.allowed=127.0.0.1,SERVER_IP,192.18.123.38

If you have deployed the OpenSSO add-on, the web.xml file for the tunnel-web application should be configured so that the "Liferay Servlet Filter" and "Secure Liferay Servlet Filter" use the default filter, com.liferay.portal.servlet.filters.secure.SecureFilter, instead of the filter that ships with the add-on, com.sun.portal.servlet.filters.soo.accessmanager.BasicAuthFilter.
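As a sketch, the relevant tunnel-web entries would look something like the following after the change. Only the filter names and filter classes are taken from the text above; the surrounding markup is standard web.xml structure, and any other init parameters in your file should be left as they are.

```xml
<!-- tunnel-web WEB-INF/web.xml (sketch): both filters point at the
     default SecureFilter instead of the OpenSSO add-on's BasicAuthFilter. -->
<filter>
	<filter-name>Liferay Servlet Filter</filter-name>
	<filter-class>com.liferay.portal.servlet.filters.secure.SecureFilter</filter-class>
</filter>
<filter>
	<filter-name>Secure Liferay Servlet Filter</filter-name>
	<filter-class>com.liferay.portal.servlet.filters.secure.SecureFilter</filter-class>
</filter>
```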

Offline Promotion of Content to Production

The remote publishing feature is very useful when the staging and production environments are connected. If they are not connected, that is, a network firewall prevents connections from staging to production due to the company's security policy, then remote publishing is not an option. In this case, the pages can be promoted to production "offline".

After the pages and their content are finalized, an administrator can export the pages from the staging environment, which generates a LAR file. This file can be copied to the production server and imported into the corresponding community or organization. The "Manage Pages" option for a community or organization contains an "Export/Import" tab that allows an administrator to export or import the pages, including their content, permissions, and the like.

Activating Staging, Activating Workflow, and Publishing Pages to Live

This section discusses the procedure for activating staging and workflow for Communities and Organizations, and publishing their pages to live.

The admin user can activate staging for Communities and Organizations. When you activate staging for a Community or Organization, you can preview its pages and make changes to them before publishing them to the live production environment.

For the procedure to create a new Community, see To add a Community. The following procedure explains how you can stage Communities. You can stage Organizations by following a similar procedure.

To Stage a Community and to Publish a Page to Live

  1. Log in to Sun GlassFish Web Space Server as admin user.

  2. Choose Add Application from the Welcome menu, and add My Communities portlet to your page.

  3. Click the Communities I Own tab on the My Communities portlet.

  4. To stage a Community, click the Actions button corresponding to a Community and choose Manage Pages from the menu.

    In this example, choose the 'cms' Community.

  5. Click the Settings tab, and enable the Activate Staging option.

    Staging is now activated for the community.

  6. Choose My Places from the Welcome menu and navigate to a page on the community.

    A live page for 'cms' is displayed.

  7. To view the staged page, choose Staging -> View Staged Page from the Welcome menu.

  8. To publish the page to live, choose Staging -> Publish to Live from the Welcome menu.

    The Publish to Live window appears.

  9. Select the pages you want to publish and click Publish.

    A dialog box with the message “Are you sure you want to publish these pages?” appears.

  10. Click OK to publish the selected pages.

  11. To view the live page, choose Staging -> View Live Page from the Welcome menu.

Clustering of Web Space Server

Once you have Web Space Server installed on more than one node of your application server, several optimizations need to be made. At a minimum, configure Web Space Server as follows for a clustered environment:


Note –

The default HSQL database cannot be used in a clustered environment. Configure MySQL or any other compatible database for use in a clustered environment.

A cluster setup needs to use the enterprise version of GlassFish 2.1 patch 2 or later.


Many of these configuration changes can be made by adding or modifying properties in your portal-ext.properties file. Remember that this file overrides the defaults that are in the portal.properties file. The original version of this file can be found in the Liferay source code or can be extracted from the portal-impl.jar file in your Liferay installation. It is a best practice to copy the relevant section that you want to modify from portal.properties into your portal-ext.properties file, and then modify the values there.
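For example, pointing every node at the same shared database is done with the jdbc.default.* properties. The following portal-ext.properties fragment is illustrative only; the host, database name, and credentials are placeholders for your own values.

```properties
# Shared database for all cluster nodes (placeholder values).
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://dbhost:3306/lportal?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=lportal
jdbc.default.password=secret
```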

Jackrabbit Sharing

Web Space Server uses Jackrabbit from Apache as its JSR-170 compliant document repository. By default, Jackrabbit is configured to store the documents on the local file system upon which Liferay is installed, in the Glassfish home/domains/domain1/webspace/jackrabbit folder. Inside this folder is Jackrabbit's configuration file, called repository.xml.

To simply move the default repository location to a shared folder, you do not need to edit Jackrabbit's configuration file. Instead, find the section in portal.properties labeled JCR and copy/paste that section into your portal-ext.properties file. One of the properties, by default, is the following:


jcr.jackrabbit.repository.root=${resource.repositories.root}/jackrabbit

Change this property to point to a shared folder that all of the nodes can see. A new Jackrabbit configuration file is generated in that location.
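For example, if the nodes share a mounted file system (the mount point below is a placeholder):

```properties
# Shared Jackrabbit repository location visible to all nodes.
jcr.jackrabbit.repository.root=/mnt/shared/webspace/jackrabbit
```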

Note that because of file locking issues, this is not the best way to share Jackrabbit resources. If two users are logged in at the same time uploading content, you could encounter data corruption with this method, so it should not be used for a production system. Instead, for better data protection, redirect Jackrabbit to the database of your choice. This requires editing Jackrabbit's configuration file.

The default Jackrabbit configuration file has sections commented out for moving the Jackrabbit configuration into the database. This has been done to make it as easy as possible to enable this configuration. To move the Jackrabbit configuration into the database, simply comment out the sections relating to the file system and uncomment the sections relating to the database. By default, these are configured for a MySQL database. If you are using another database, you will likely need to modify the configuration, as some databases require changes to the configuration file. For example, the default configuration uses Jackrabbit's DbFileSystem class to mimic a file system in the database. While this works well in MySQL, it does not work for all databases. If you are using an Oracle database, for example, you will need to modify this to use OracleFileSystem. See the Jackrabbit documentation at http://jackrabbit.apache.org for further information.

You will also likely need to modify the JDBC database URLs so that they point to your database. Don't forget to create the database first, and grant the user ID you are specifying in the configuration file access to create, modify, and drop tables.
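In MySQL, for example, the preparation might look like the following sketch; the database name, user, and password are placeholders, and your security policy may call for narrower privileges.

```sql
-- Illustrative setup for a dedicated Jackrabbit database (placeholder names).
CREATE DATABASE jackrabbit CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON jackrabbit.* TO 'jcr'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```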

Once you have configured Jackrabbit to store its repository in a database, the next time you bring up Liferay, the necessary database tables will be created automatically. Jackrabbit, however, does not create indexes on these tables, and so over time this can be a performance penalty. To fix this, you will need to manually go into your database and index the primary key columns for all of the Jackrabbit tables.

All of your Liferay nodes should be configured to use the same Jackrabbit repository in the database. Once that is working, you can create a Jackrabbit cluster (please see the following section).

Lucene Configuration

Lucene, the search indexer which Web Space Server uses, can be in a shared configuration for a clustered environment, or an index can be created on each node of the cluster. If you wish to have a shared index, you will need to either share the index on the file system or in the database.

The Lucene configuration can be changed by modifying values in your portal-ext.properties file. Open your portal.properties file and search for the text Lucene. Copy that section and then paste it into your portal-ext.properties file.

If you wish to store the Lucene search index on a file system that is shared by all of the Web Space Server nodes, you can modify the location of the search index by changing the lucene.dir property. By default, this property points to the /webspace/lucene folder inside the home folder of the user running Web Space Server:


lucene.dir=${resource.repositories.root}/lucene/

Change this to the folder of your choice. To make the change take effect, you will need to restart Web Space Server. You can point all of the nodes to this folder, and they will use the same index.
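For example, to point every node at a shared mount (the path is a placeholder):

```properties
# Shared Lucene index location visible to all nodes.
lucene.dir=/mnt/shared/webspace/lucene/
```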

As with Jackrabbit, however, this is not the best way to share the search index, as it could result in file corruption if different nodes try to reindex at the same time. A better way is to share the index via a database, where the database can enforce data integrity on the index. This is very easy to do; it requires only a simple change to your portal-ext.properties file.

A single property, lucene.store.type, controls this. By default it is set to use the file system. You can change it so that the index is stored in the database by setting the following:


lucene.store.type=jdbc

The next time Web Space Server is started, new tables will be created in the Web Space Server database, and the index will be stored there. If all the Web Space Server nodes point to the same database tables, they will be able to share the index.

Alternatively, you can leave the configuration alone, and each node will have its own index. This ensures that there are no collisions when multiple nodes update the index, because they all have separate indexes.

Hot Deploy

Plugins which are hot deployed will need to be deployed separately to all of the Web Space Server nodes. Each node should, therefore, have its own hot deploy folder. This folder needs to be writable by the user under which Web Space Server is running, because plugins are moved from this folder to a temporary folder when they are deployed. This is to prevent the system from entering an endless loop, because the presence of a plugin in the folder is what triggers the hot deploy process.

When you want to deploy a plugin, copy that plugin to the hot deploy folders of all of the Web Space Server nodes. The hot deploy directory for Web Space Server when running on GlassFish is Glassfish home/domains/domain1/webspace/deploy. Depending on the number of nodes, it may be best to create a script to do this. Once the plugin has been deployed to all of the nodes, you can then make use of it (by adding the portlet to a page or choosing the theme as the look and feel for a page or page hierarchy).
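Such a script can be as simple as the following sketch. The deploy_plugin name and the folder arguments are illustrative; for remote nodes you would replace cp with scp.

```shell
#!/bin/sh
# Sketch: copy a plugin WAR into each node's hot deploy folder.
# For remote nodes, replace cp with scp (e.g. scp "$war" "node1:$dir/").
deploy_plugin() {
    war=$1
    shift
    for dir in "$@"; do
        cp "$war" "$dir/" && echo "copied $(basename "$war") to $dir"
    done
}
```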

Some containers provide a facility that allows the end user to deploy an application to one node, after which it is copied to all of the other nodes. If you have configured your application server to support this, you won't need to hot deploy a plugin to all of the nodes, as your application server will handle it transparently. Make sure, however, that you use the hot deploy mechanism to deploy plugins, as in many cases Web Space Server slightly modifies plugin WAR files when hot deploying them.

Configuring Jackrabbit With MySQL

Liferay includes Jackrabbit by default as its JSR-170 Java Content Repository.

The Image Gallery and Document Library portlets use Jackrabbit to store data. By default, Jackrabbit stores CMS (Content Management System) data in a file system. The following procedure explains how to configure Jackrabbit to use a MySQL database to store the data from the Image Gallery and Document Library portlets.

To Configure Jackrabbit With MySQL

  1. Add the following properties to the portal-ext.properties file.

    jcr.initialize.on.startup=true
    jcr.jackrabbit.repository.root=/jackrabbit 
    jcr.jackrabbit.config.file.path=/jackrabbit/repository.xml
    dl.hook.impl=com.liferay.documentlibrary.util.JCRHook

    The Web Space Server evaluation bundle has a portal-ext.properties file in GlassFish install-dir/domains/domain1/applications/j2ee-modules/webspace/WEB-INF/classes. If you are using a Web Space Server bundle that does not include samples, you have to create a portal-ext.properties file under ROOT-DIR/webspace-for-gfv2/var/webspace/war-workspace/customs/webspace/WEB-INF/classes.

  2. Make changes to the repository.xml file residing under GlassFish install-dir/webspace-gfv2-OS/var/webspace/data/jackrabbit.

    Generally, when configuring Jackrabbit for MySQL, you have to uncomment all the markup related to MySQL. For other databases, you have to change the connection, credentials, and schema settings.

    The modified repository.xml may look like this:

    <?xml version="1.0"?>
    
    <Repository>
    	<!--<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
    		<param name="path" value="${rep.home}/repository" />
    	</FileSystem>-->
    
    	<!--
    	Database File System (Cluster Configuration)
    
    	This is sample configuration for mysql persistence that can be used for
    	clustering Jackrabbit. For other databases, change the connection,
    	credentials, and schema settings.
    	-->
    
    	<FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
    		<param name="driver" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource"/>
    		<param name="url" value="jdbc:mysql://nicp239.india.sun.com:3306/lportal?useUnicode=true&amp;characterEncoding=UTF-8" />
    		<param name="user" value="root" />
    		<param name="password" value="password" />
    		<param name="schema" value="mysql"/>
    		<param name="schemaObjectPrefix" value="J_R_FS_"/>
    	</FileSystem>
    
    	<Security appName="Jackrabbit">
    		<AccessManager class="org.apache.jackrabbit.core.security.SimpleAccessManager" />
    		<LoginModule class="org.apache.jackrabbit.core.security.SimpleLoginModule">
    			<param name="anonymousId" value="anonymous" />
    		</LoginModule>
    	</Security>
    	<Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="default" />
    	<Workspace name="${wsp.name}">
    		<!--<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
    			<param name="path" value="${wsp.home}" />
    		</FileSystem>
    		<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.BundleFsPersistenceManager" />-->
    
    		<!--
    		Database File System and Persistence (Cluster Configuration)
    
    		This is sample configuration for mysql persistence that can be used for
    		clustering Jackrabbit. For other databases, change the  connection,
    		credentials, and schema settings.
    		-->
    
    		<PersistenceManager class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
    			<param name="driver" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource"/>
    			<param name="url" value="jdbc:mysql://nicp239.india.sun.com:3306/lportal?useUnicode=true&amp;characterEncoding=UTF-8" />
    			<param name="user" value="root" />
    			<param name="password" value="password" />
    			<param name="schema" value="mysql" />
    			<param name="schemaObjectPrefix" value="J_PM_${wsp.name}_" />
    			<param name="externalBLOBs" value="false" />
    		</PersistenceManager>
    		<FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
    			<param name="driver" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource"/>
    			<param name="url" value="jdbc:mysql://nicp239.india.sun.com:3306/lportal?useUnicode=true&amp;characterEncoding=UTF-8" />
    			<param name="user" value="root" />
    			<param name="password" value="password" />
    			<param name="schema" value="mysql"/>
    			<param name="schemaObjectPrefix" value="J_FS_${wsp.name}_"/>
    		</FileSystem>
    	</Workspace>
    	<Versioning rootPath="${rep.home}/version">
    		<!--<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
    			<param name="path" value="${rep.home}/version" />
    		</FileSystem>
    		<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.BundleFsPersistenceManager" />-->
    
    		<!--
    		Database File System and Persistence (Cluster Configuration)
    
    		This is sample configuration for mysql persistence that can be used for
    		clustering Jackrabbit. For other databases, change the connection,
    		credentials, and schema settings.
    		-->
    
    		<FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
    			<param name="driver" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource"/>
    			<param name="url" value="jdbc:mysql://nicp239.india.sun.com:3306/lportal?useUnicode=true&amp;characterEncoding=UTF-8" />
    			<param name="user" value="root" />
    			<param name="password" value="password" />
    			<param name="schema" value="mysql"/>
    			<param name="schemaObjectPrefix" value="J_V_FS_"/>
    		</FileSystem>
    		<PersistenceManager class="org.apache.jackrabbit.core.state.db.SimpleDbPersistenceManager">
    			<param name="driver" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource"/>
    			<param name="url" value="jdbc:mysql://nicp239.india.sun.com:3306/lportal?useUnicode=true&amp;characterEncoding=UTF-8" />
    			<param name="user" value="root" />
    			<param name="password" value="password" />
    			<param name="schema" value="mysql" />
    			<param name="schemaObjectPrefix" value="J_V_PM_" />
    			<param name="externalBLOBs" value="false" />
    		</PersistenceManager>
    	</Versioning>
    
    	<!--
    	Cluster Configuration
    
    	This is sample configuration for mysql persistence that can be used for
    	clustering Jackrabbit. For other databases, change the  connection,
    	credentials, and schema settings.
    	-->
    
        <!--<Cluster id="node_1" syncDelay="5">
    		<Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    			<param name="revision" value="${rep.home}/revision"/>
    			<param name="driver" value="com.mysql.jdbc.Driver"/>
    			<param name="url" value="jdbc:mysql://localhost/jcr"/>
    			<param name="user" value=""/>
    			<param name="password" value=""/>
    			<param name="schema" value="mysql"/>
    			<param name="schemaObjectPrefix" value="J_C_"/>
    		</Journal>
        </Cluster>-->
    </Repository>
  3. Deploy webspace.war.

    To deploy webspace.war, place it under GlassFish install-dir/domains/domain1/autodeploy and restart GlassFish.

  4. Add some documents through the Document Library portlet. If you examine the lportal database, you can see the Document Library data stored in the following tables:

    • J_V_PM_BINVAL

    • J_V_PM_NODE

    • J_V_PM_PROP

    • J_V_PM_REFS

Installing Plugins

Web Space Server comes with two portlets which can handle plugin installation: the Plugin Installer and the Update Manager. The Update Manager helps to determine if you are running the most recent version of a plugin.

You can add the Update Manager portlet to your page by clicking Add Application from the welcome dock. The Update Manager displays which plugins are already installed on the system, what their version numbers are, and whether an update is available.

To install a plugin from the Update Manager, click the Install More Plugins button. This invokes the Plugin Installer portlet, with the Portlet Plugins tab selected by default. You can install or uninstall the portlets available in the repository. If your server is firewalled, you may not see any plugins in the repository, and you will need to install plugins manually. To install plugins manually, click the Upload File tab. You can browse to the WAR file for a layout template, portlet, or theme that you want to install. You can specify the deployment context in a text box for easy identification of the portlet. Click the Install button to install the portlet.

If you do not wish to use the Update Manager or Plugin Installer to deploy plugins, you can also deploy them at the operating system level. The first time Web Space Server starts, it creates a hot deploy folder which is by default created inside the home folder of the user who launched Web Space Server. For example, say that on a Linux system, the user lportal was created in order to run Web Space Server. The first time Web Space Server is launched, it will create a folder structure in /home/lportal/webspace to house various configuration and administrative data. One of the folders it creates is called deploy. If you copy a portlet or theme plugin into this folder, Liferay will deploy it and make it available for use just as though you'd installed it via the Update Manager or Plugin Installer. In fact, this is what the Update Manager and Plugin Installer portlets are doing behind the scenes.

You can change the defaults for this directory structure so that it is stored anywhere you like by modifying the appropriate properties in your portal-ext.properties file.

Creating a Custom Plugin Repository

As your enterprise builds its own library of portlets for internal use, you can create your own plugin repository to make it easy to install and upgrade portlets. This will allow different departments running different instances of Web Space Server to share portlets and install them as needed. If you are a software development house, you may wish to create a plugin repository for your own products. Web Space Server makes it easy for you to create your own plugin repository and make it available to others.

You can create your plugin repository using the Software Catalog portlet. This method allows users to upload their plugins to an HTTP server to which they have access. They can then register their plugins with the repository by adding a link to it via the portlet's graphical user interface. Web Space Server then generates the XML necessary to connect the repository to a Plugin Installer portlet running on another instance of Web Space Server. This XML file can be placed on an HTTP server, and its URL can be added to the Plugin Installer, making the portlets in this repository available to the server running Web Space Server.

Using the Software Catalog Portlet

The Software Catalog portlet is not an instanceable portlet, which means that each community can have only one instance of the portlet. If you add the portlet to another page in the community, it will hold the same data as the portlet that was first added. Different communities, however, can have different software repositories, so you can host several software repositories on the same instance of Web Space Server if you wish; they just have to be in different communities.

The Software Catalog portlet has several tabs. The first tab is labeled Products. The default view of the portlet, when populated with software, displays what plugins are available for install or download. This can be seen in the version on Web Space Server's home page.

The first step in adding a plugin to your software repository is to add a license for your product. A license communicates to users the terms upon which you are allowing them to download and use your software. Click the Licenses tab and then click the Add License button that appears. You will then see a form which allows you to type the title of your license, a URL pointing to the actual license document, and check boxes denoting whether the license is open source, active, or recommended.

When you have finished filling out the form, click the Save button. Your license will be saved. Once you have at least one license in the system, you can begin adding software products to your software catalog.

Your next step is to create the product record in the Software Catalog portlet. This registers the product in the software catalog and allows you to start adding versions of your software for users to download and/or install directly from their instances of Web Space Server. You will first need to put the .war file containing your software on a web server that is accessible without authentication to the users who will be installing your software. If you are creating a software catalog for an internal intranet, you would place the file on a web server that is available to anyone inside your organization's firewall.
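As a minimal sketch, staging the plugin WAR on such a web server might look like the following. The document root path and the WAR file name are assumptions for illustration; substitute the docroot of your own HTTP server and the actual file.

```shell
# Hypothetical document root for the internal web server; adjust to your setup.
DOCROOT=/tmp/docroot
mkdir -p "$DOCROOT/portlets"

# Stand-in for the real plugin WAR in this sketch.
touch my-summary-portlet-1.0.war

# Stage the WAR where it can be downloaded without authentication.
cp my-summary-portlet-1.0.war "$DOCROOT/portlets/"
ls -l "$DOCROOT/portlets/my-summary-portlet-1.0.war"
```

After staging, it is worth confirming that the file is reachable anonymously, for example by requesting its download URL with `curl -I` from a machine that is not logged in to the server.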

To create the product record in the Software Catalog portlet, click the Products tab, and then click the Add Product button. Fill out the form with information about your product.

Figure 8–1 Adding a Product to the Software Catalog (partial)


Name: The name of your software product.

Type: Select whether this is a portlet or a theme plugin.

Licenses: Select the license(s) under which you are releasing this software.

Author: Type the name of the author of the software.

Page URL: If the software has a home page, type its URL here.

Tags: Type any tags you would like added to this software.

Short Description: Type a short description. This will be displayed in the summary table of your software catalog.

Long Description: Type a longer description. This will be displayed on the details page for this software product.

Permissions: Click the Configure link to set permissions for this software product.

Group ID: Type a group ID. A group ID is a namespace which usually identifies the company or organization that made the software. For example, use old-computers.

Artifact ID: Type an artifact ID. The artifact ID is a unique name within the namespace for your product. For example, use my-summary-portlet.

Screenshot: Click the Add Screenshot button to add a screenshot of your product for users to view.

When you have finished filling out the form, click the Save button. You will be brought back to the product summary page, and you will see that your product has been added to the repository.

Notice that in the version column, N/A is being displayed. This is because there are not yet any released versions of your product. To make your product downloadable, you need to create a version of your product and point it to the file you uploaded to your HTTP server earlier.

Before you do that, however, you need to add a Framework Version to your software catalog. A Framework version denotes what version of Web Space Server your plugin is designed for and works on. You cannot add a version of your product without linking it to a version of the framework for which it is designed.

Why is this so important? Because as Web Space Server gains more and more features, you may wish to take advantage of those features in future versions of your product, while still keeping older versions of your product available for those who are using older versions of Web Space Server.

Click the Framework Versions tab and then click the Add Framework Version button. Give the framework a name and a URL, and leave the Active check box selected.

Now go back to the Products tab and click your product. You will notice that a message is displayed stating that the product does not have any released versions. Click the Add Product Version button.


Note –

You must specify a group ID and artifact ID before you can specify a product version. If they have not been specified for the product, the Product Version page displays a link labeled "It is a must to specify a group ID and artifact ID before you specify a product version"; click this link to specify them.


Figure 8–2 Adding a Product Version to the Software Catalog


Version Name: Type the version of your product.

Change Log: Type some comments regarding what changed between this version and any previous versions.

Supported Framework Versions: Select the framework version for which your software product is intended.

Download Page URL: If your product has a descriptive web page, type its URL here.

Direct Download URL (Recommended): Type a direct download link to your software product here. The Plugin Installer portlet will follow this link in order to download your software product.

Include Artifact in Repository: To enable others to use the Plugin Installer portlet to connect to your repository and download your plugin, select Yes here.

When you are finished filling out the form, click the Save button. Your product version will be saved, and your product will now be available in the software repository.