Administrator Guide


Managing Portal Content

This chapter explains the design of managed content availability in the portal and provides the steps you take to make content available to users. The chapter includes the following topics:

 


About Portal Content

The portal is designed to enable users to discover all of the enterprise content related to their employee role by browsing or searching portal areas.

Portal users should be able to assemble a My Page that provides access to all of the information they need. For example, to write user documentation, technical writers need to be able to assemble a My Page that includes portlet- or community-based access to documentation standards and conventions, solution white papers, product data sheets, product demonstrations, design specifications, release milestones, test plans, and bug reports, as well as mail-thread discussions that are relevant to customer support and satisfaction. To perform their role, technical writers do not need access to the personnel records that an HR employee or line-manager might require, or to the company financial data that the controller or executive staff might need, for example. A properly designed enterprise portal, then, would reference all of these enterprise documents so that any employee performing any function can access all of the information they need; but a properly designed enterprise portal would also ensure that only the employee performing the role can discover the information.

This chapter describes the tasks you complete to enable managed discovery of enterprise content through the portal:

 


Configuring Content Types and Document Properties

This section describes how to configure the content type objects and document property objects that enable the document filters used by the Knowledge Directory, content crawlers, the Smart Sort utility, and the Search Service. Filters and returned search results are based on the associated portal properties, not on properties defined in the source document.

When you add documents to the portal, the portal maps source document fields to portal properties according to mappings you specify in the Global Content Type Map, the particular content type definition, the Global Document Property Map, and any content crawler-specific content type mappings.

To enable content type and property mapping:

  1. Configure the Global Content Type Map.
  2. Add and configure additional content types, as needed.

    For details, see Configuring the Global Content Type Map.

  3. Configure the Global Document Property Map.
  4. Add and configure additional document properties, as needed.

    For details, see Configuring the Global Document Property Map.

Configuring the Global Content Type Map

The Global Content Type Map allows you to map source document identifiers (for example, file extensions) to content types. The content type associated with a source document determines how metadata in the source document is mapped to portal properties.
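
Conceptually, the map is a two-step lookup: a document identifier resolves to a content type, and the content type carries its own property mappings. The following minimal sketch illustrates the idea; the extensions, content type names, and property names are hypothetical examples, and the real mappings are configured through the portal editors rather than in code.

# Minimal sketch of the two-step lookup a content type map implies.
# The identifiers, content type names, and property names below are
# hypothetical examples, not the portal's actual configuration.

GLOBAL_CONTENT_TYPE_MAP = {
    ".html": "HTML Document",
    ".doc": "MS Word Document",
    ".pdf": "Adobe Acrobat Document",
}

# Each content type maps source document fields to portal properties.
CONTENT_TYPE_PROPERTY_MAPS = {
    "HTML Document": {"TITLE": "Title", "description": "Description"},
    "MS Word Document": {"Author": "Author", "Title": "Title"},
    "Adobe Acrobat Document": {"Author": "Author", "Title": "Title"},
}

def resolve_content_type(file_name: str) -> str:
    """Pick a content type based on the file extension."""
    for extension, content_type in GLOBAL_CONTENT_TYPE_MAP.items():
        if file_name.lower().endswith(extension):
            return content_type
    return "Unknown Document"  # falls back to a generic type

if __name__ == "__main__":
    print(resolve_content_type("press_release.html"))  # -> HTML Document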

To configure the Global Content Type Map:

  1. Click Administration.
  2. In the Select Utility drop-down list, choose Global Content Type Map.
  3. Configure identifiers for content types as described in the online help.
  4. Click Finish.

To create a new content type:

  1. Click Administration.
  2. In the Create Object drop-down list, choose Content Type.
  3. Select an appropriate Accessor, configure a property map, and specify default behavior for populating portal properties from the source documents as described in the online help.
  4. Click Finish.

Configuring the Global Document Property Map

The Global Document Property Map provides default mappings for properties common to the documents in your portal. These mappings are applied after the mappings in the content type.

When you import a document into the portal, the portal performs the following actions:

  1. The portal determines which content type to use, based on the Global Content Type Map or the content type settings for the content crawler.
  2. The portal maps source properties to portal property values based on the mappings in the content type.
  3. If the Global Document Property Map includes properties that are not in the source documents, the portal populates the portal property values for these documents based on defaults you configure (see the sketch after this list).
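
The following minimal sketch illustrates steps 2 and 3 of the sequence above using hypothetical field and property names; the actual mappings are configured through the Global Document Property Map and content type editors, not in code.

# Sketch of the import sequence described above, using hypothetical
# property names. Step 1 is the content type lookup; step 2 applies the
# content type's mappings; step 3 fills in defaults for properties the
# source document does not supply.

def map_document_properties(source_fields, content_type_map, defaults):
    portal_properties = {}
    # Step 2: copy source fields into portal properties per the content type.
    for source_name, portal_name in content_type_map.items():
        if source_name in source_fields:
            portal_properties[portal_name] = source_fields[source_name]
    # Step 3: populate remaining properties from configured defaults.
    for portal_name, default_value in defaults.items():
        portal_properties.setdefault(portal_name, default_value)
    return portal_properties

if __name__ == "__main__":
    source = {"TITLE": "Quarterly Report", "creation_date": "18-Jan-2004"}
    mapping = {"TITLE": "Title", "creation_date": "Created"}
    defaults = {"Language": "English"}
    print(map_document_properties(source, mapping, defaults))
    # {'Title': 'Quarterly Report', 'Created': '18-Jan-2004', 'Language': 'English'}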

To configure the Global Document Property Map:

  1. Click Administration.
  2. In the Select Utility drop-down list, choose Global Document Property Map.
  3. Create mappings between portal properties and document attributes as described in the online help.
  4. Click Finish.

To create new properties:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Property.
  4. Configure the property settings as described in the online help.
  5. Click Finish.

Mapping HTML Page Properties

Generally, you will be able to determine what source document attributes can be mapped to portal properties, but this might not be as clear in HTML documents. Table 4-1 provides suggestions for mapping HTML attributes to portal properties.

The HTML Accessor handles all common character sets used on the Web, including UTF-8.

Table 4-1 Mapping HTML Attributes to Portal Properties 
HTML Attribute
Portal Property
<Title> Tag
The HTML <TITLE> tag maps to the portal property Title.
<Meta> Tag
You can add property information to an HTML page using the <META> tag, as shown in this example:
<HTML>
<HEAD>
<TITLE>Press Release - Company X Promotes Five Vice Presidents and Elects Six New Corporate Officers </TITLE>
<META NAME="corporate_information_class" CONTENT="Press Relations">
<META NAME="creation_date" CONTENT="18-Jan-2004"> 
<META NAME="stop_date" CONTENT="18-Jan-2005">
<META NAME="next_check_date" CONTENT="18-Jan-2005"> 
<META NAME="last_check_date" CONTENT="18-Jan-2004"> 
<META NAME="web_author_id" CONTENT="ktstatha">
<META NAME="language" CONTENT="English"> 
<META NAME="country" CONTENT="USA">
</HEAD>
Using this meta information and setting up the appropriate content type allows content crawlers and filters to be much more effective. For example, you could map the <META> tag creation_date to the portal property Created; this allows you to automatically sort documents into the correct monthly folder, such as Jan 2004.
Headline Tags
The Accessor returns a value for each headline tag (<H1>, <H2>, <H3>, <H4>, <H5>, and <H6>) and each bold tag (<B>). The attribute name returned by the Accessor is the name of the tag followed by a one-based ordinal index in parentheses, and the value is the contents of the tag. For example, suppose an HTML document contains:
<H1>Value 1</H1>
<H3>Value 2</H3>
<H1>Value 3</H1>
<B>Value 4</B>
The HTML Accessor returns the following source document attribute-value pairs:
<h1>(1)		 Value 1
<h3>(1)		 Value 2
<h1>(2)		 Value 3
<B>(1)		 Value 4
If, on a particular news site, the second <H2> tag contains the name of the article and the third <B> tag contains the name of the author, you could map the portal property Title to <H2>(2) and the portal property Author to <B>(3).
HTML Comments
It has become a common practice to store metadata in HTML comments using the following format:
<!-- Writer: jm -->
<!-- AP: md -->
<!-- Copy editor: mr -->
<!-- Web editor: ad -->
In other words, the format is the HTML comment delimiter followed by the name, a colon, the value, and a close comment delimiter. The HTML Accessor parses data in this format and returns source document attribute-value pairs:
Writer jm
AP md
Copy editor mr
Web editor ad
Parent URL
Documents imported via Web crawl return an attribute named Parent URL with the value of the URL of the parent page that contains a link to the document.
Anchors
The HTML Accessor provides special handling for internal anchors
(<a name="target">) and URLs that reference them (http://server/page#target). You might map anchors to portal attributes in the following ways:
  • Alternate source for the portal Title attribute: When the document URL for an HTML document contains a fragment identifier (for example, #target in the example above) and the Accessor finds that anchor in the document, it discards all title and headline tags preceding the anchor and returns, as the suggested document title, the first subsequent headline tag. All subsequent tags are indexed relative to the anchor tag, so mapping a property to <H1>(2) means "use the second <H1> tag after the anchor tag named in the document URL."

  • Mapping the Anchor Section attribute to the document Description or Summary: The HTML Accessor returns an attribute named Anchor Section containing the text immediately following the named anchor tag (stripped of markup tags and HTML decoded). Mapping this attribute to the document description allows the portal to generate a relevant description for each section of a large document.

    The HTML Accessor generates its own summary by returning the first summary-sized chunk of text in the document stripped of HTML markup tags and correctly HTML decoded. It returns this summary as an attribute named Summary.

    The Accessor executes the DocumentSummary method, which returns the value of the Anchor Section attribute, if available. If this attribute is not available, its second choice is the value of the Description attribute from the <META NAME="description"> tag. If this is not available, its third and final choice is the Summary attribute.
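
The following sketch approximates three of the behaviors described in Table 4-1: one-based ordinal indexing of headline and bold tags, parsing of name-colon-value HTML comments, and the summary fallback order. It is an illustration only, not the HTML Accessor's actual implementation.

import re
from collections import defaultdict
from html.parser import HTMLParser

# Illustration only: approximates the behaviors described in Table 4-1,
# not the HTML Accessor's actual code.

class HeadlineCollector(HTMLParser):
    """Collect <H1>-<H6> and <B> contents as tag(ordinal) -> value pairs."""
    TAGS = {"h1", "h2", "h3", "h4", "h5", "h6", "b"}

    def __init__(self):
        super().__init__()
        self.counts = defaultdict(int)
        self.pairs = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in self.TAGS:
            self.counts[tag] += 1
            self._current = f"<{tag}>({self.counts[tag]})"  # one-based ordinal

    def handle_data(self, data):
        if self._current:
            self.pairs[self._current] = data.strip()
            self._current = None

def parse_comment_metadata(html_text):
    """Parse '<!-- Name: value -->' comments into attribute-value pairs."""
    return dict(re.findall(r"<!--\s*([^:]+?)\s*:\s*(.*?)\s*-->", html_text))

def document_summary(anchor_section, meta_description, summary):
    """Fallback order described above: Anchor Section, then the META
    description, then the generated Summary attribute."""
    return anchor_section or meta_description or summary

if __name__ == "__main__":
    collector = HeadlineCollector()
    collector.feed("<H1>Value 1</H1><H3>Value 2</H3><H1>Value 3</H1><B>Value 4</B>")
    print(collector.pairs)
    print(parse_comment_metadata("<!-- Writer: jm --><!-- AP: md -->"))
    print(document_summary(None, "A press release", "First chunk of text"))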

 


Configuring Content Sources

This section describes how to configure content sources that enable portal access to content on WWW locations, file systems, and back-end content servers. This section includes the following topics:

About Content Sources

Content sources provide access to external content repositories, allowing users and content crawlers to add document records and links in the Knowledge Directory. For example, a content source for a secured Web site can be configured to fill out the Web form necessary to gain access to that site.

Register a content source for each secured Web site or back-end repository from which content can be imported into your portal.

Content Source Histories

A content source keeps track of what content has been imported, deleted, or rejected by the content crawlers that access it. It keeps a record of imported files so that content crawlers do not create duplicate links. To prevent multiple copies of the same link from being imported into your portal, set content crawlers that access the same content source to import only content that has not already been imported.
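
The duplicate-prevention idea can be pictured as a shared history of imported locations that a crawler consults before importing a document. The following minimal sketch uses a hypothetical in-memory structure; the portal maintains its own history store for each content source.

# Minimal sketch of the duplicate-prevention idea described above.
# Hypothetical structure, not the portal's actual history store.

class ContentSourceHistory:
    def __init__(self):
        self.imported = set()

    def should_import(self, document_url: str) -> bool:
        """Return True only the first time a location is seen for this source."""
        if document_url in self.imported:
            return False
        self.imported.add(document_url)
        return True

if __name__ == "__main__":
    history = ContentSourceHistory()
    print(history.should_import("http://example.com/a.pdf"))  # True
    print(history.should_import("http://example.com/a.pdf"))  # False (duplicate)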

Content Sources and Security

Because a content source accesses secured documents, you must secure access to the content source itself. Content sources, like everything in the portal, have security settings that allow you to specify exactly which portal users and groups can see the content source. Users that do not have Read access to a content source cannot select it, or even see it, when submitting content or building a content crawler.

Using Content Sources and Security to Control Access

You can create multiple content sources that access the same repository of information. For example, you might have two Web content sources accessing the same Web site. One of these content sources could access the site as an executive user that can see all of the content on the site. The other content source would access the site as a managerial user that can see some secured content, but not everything. You could then grant executive users access to the content source that accesses the Web site as an executive user, and grant managerial users access to the content source that accesses the Web site as a managerial user.

Note: If you crawled the same repository using both of these content sources, you would import duplicate links into your portal. Refer to Content Source Histories.

Configuring Web Content Sources

Web content sources allow users to import content from the Web into the portal through Web content crawlers or Web document submission. When you install the portal, the World Wide Web content source is created. This content source provides access to any unsecured Web site.

To create a Web content source:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Content Source - WWW.
  4. When prompted, select the Web service for the World Wide Web.
  5. Define your Web content source as described in the online help.
  6. Click Finish.

Configuring Remote Content Sources

Remote content sources allow users to import content from an external content repository into the portal through remote content crawlers or remote document submission.

The following table describes the steps you take to configure a remote content source.

Table 4-2 Steps to Configure a Remote Content Source 
Basic Step
Procedure
Create a remote server to use for both user document submission and content crawlers. If you are using an AquaLogic Interaction Content Service, you can import the remote server when you register the AquaLogic Interaction Content Service with the portal. For details, refer to the documentation provided with the software.
  1. Click Administration.
  2. Navigate to an existing administrative folder or create a new one in which to store the portal objects needed for importing content.
  3. In the Create Object drop-down list, select Remote Server.
  4. Configure connection information for the remote server as described in the online help.
  5. Click Finish.
Create a Web service to use for both user document submission and content crawlers. If you are using an AquaLogic Interaction Content Service, you can import the Web service when you register the AquaLogic Interaction Content Service with the portal. For details, refer to the documentation provided with the software.
  1. In the Create Object drop-down list, select Web Service - content.
  2. Configure connection information for the Web service as described in the online help.
  3. Click Finish.
Configure a remote content source.
  1. In the Create Object drop-down list, click Content Source - Remote.
  2. Define the remote content source as described in the online help.
  3. Click Finish.

 


Managing the Knowledge Directory

This section describes how to set up and manage the portal Knowledge Directory. It includes the following topics:

About the Knowledge Directory

The Knowledge Directory is a portal area that users can browse to discover documents that have been uploaded by users or imported by content crawlers. This information is organized into subfolders in a manner similar to file storage volumes and shares, but you might want to organize it in a more granular fashion to allow you to delegate administrative responsibility and facilitate managed access with ACLs.

The default portal installation includes a Knowledge Directory root folder with one subfolder named Unclassified Documents. Before you create additional subfolders, define a taxonomy, as described in the Deployment Guide for BEA AquaLogic User Interaction G6.

Setting Knowledge Directory Preferences

You can specify how the Knowledge Directory displays documents and folders, including whether to generate the display of contents from a Search Service search or a database query, by setting Knowledge Directory preferences.

To set Knowledge Directory preferences:

  1. Click Administration.
  2. In the Select Utility drop-down list, click Knowledge Directory Preferences.
  3. Specify preferences as described in the online help.
  4. Click Finish.

Creating Folders

To create a Knowledge Directory folder:

  1. Click Directory | Edit Directory.
  2. Navigate to the folder into which you want to place a new subfolder.
  3. Click the create folder icon to launch the Folder Editor.
  4. Provide a name and description and click OK.
  5. Click the Edit Details icon and complete the settings as described in the online help.
  6. If you want to modify the ACL that is inherited from the parent folder by default, click Security.

Submitting Documents

To submit (upload) a document:

  1. Browse to the folder where you want to place the document.
  2. From the Submit Document drop-down list, choose Simple Submit or choose a content source.
  3. Complete the submission forms as described in the online help.

Controlling Document Placement with Filters

Use filters to control what content goes into which folder when crawling in documents or using Smart Sort. A filter sets conditions to sort documents into associated folders in the Knowledge Directory.

A filter is a combination of a basic fields search and statements. The basic fields search operates on the name, description, and full-text content fields associated with documents. Statements can operate on both the content and the properties of documents. Statements can be grouped together in groupings. Groupings are containers for statements or other groupings, allowing you to create complex filters. Groupings are analogous to parentheses in mathematical equations.

A single filter can be used by multiple folders. You can also apply multiple filters to one folder.
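
The following sketch shows how statements and nested groupings can be evaluated against a document's portal properties. The property names, operators, and the example filter are hypothetical; real filters are defined in the Filter Editor, not in code.

# Sketch of evaluating a filter's statements and groupings against a
# document's portal properties. Hypothetical property names and operators.

def evaluate(node, document):
    """A grouping is {'all'|'any': [children]}; a statement is
    (property, operator, value)."""
    if isinstance(node, dict):
        op, children = next(iter(node.items()))
        results = (evaluate(child, document) for child in children)
        return all(results) if op == "all" else any(results)
    prop, operator, value = node
    actual = str(document.get(prop, ""))
    if operator == "contains":
        return value.lower() in actual.lower()
    if operator == "equals":
        return actual == value
    return False

if __name__ == "__main__":
    sports_filter = {"any": [
        ("Title", "contains", "sports"),
        {"all": [("Section", "equals", "News"), ("Keywords", "contains", "football")]},
    ]}
    doc = {"Title": "Local team wins", "Section": "News", "Keywords": "football, playoffs"}
    print(evaluate(sports_filter, doc))  # True: the nested grouping matches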

Creating Filters

To create a filter:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Filter.
  4. Define your filter as described in the online help.
  5. Click Finish.

Assigning Filters to a Folder

After you create a filter, you assign it to folders. You can assign filters to any Knowledge Directory folder to which you have the appropriate access. If you assign more than one filter to a folder, you must specify whether content must pass all filters, or at least one filter.

To assign a filter to a folder:

  1. Click Directory | Edit Directory.
  2. Navigate to the folder you want to assign the filter to and click the edit icon next to that folder. This launches the Folder Editor.
  3. In the Filters section, click Add Filter.
  4. In the Add Filter dialog box, expand the folders, as necessary, and select any filters you want to add to this folder.
  5. When you are finished adding filters, click OK.
  6. In the Folder Editor, click Finish.

Any content that passes the filters of the destination folder but does not pass the filters of the destination folder's subfolders can be placed into a default folder.

To specify a default folder:

  1. Click Directory | Edit Directory.
  2. Navigate to the folder and click the Edit Details icon next to that folder.
  3. In the Default Folder drop-down list, choose where you want to store documents that do not pass the filters of any subfolders. If this folder does not have any subfolders, the default folder will be this folder. If this folder has subfolders, you can choose whether to make this folder or one of the subfolders the default folder.
  4. Click Finish.

Using Filters to Organize Crawled Content

You can organize crawled content into subcategories by creating a folder in the Knowledge Directory, and using filters on that folder's subfolders. For example, you can create a content crawler that crawls a news Web site and places content into a folder, then use filters on that folder's subfolders to separate the content into Politics, Sports, and Travel.

Note: For information on content crawlers, see About Content Crawlers.

To use filters to organize content using this example:

  1. Create a folder in the Knowledge Directory and call it News.
  2. Create three subfolders under News called: Politics, Sports, and Travel.
  3. Create a content crawler that crawls the news Web site, and choose the News folder as its destination folder.
  4. Create a filter to assign to each of the subfolders. Each filter should limit the content of the associated folder to the desired news category: politics, sports, or travel.
  5. Assign each filter to the appropriate subfolder. When content is crawled in from the news site into the News folder, it will automatically be filtered into the appropriate subfolder according to the filters you created and assigned to those subfolders.
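
A minimal sketch of the routing behavior in this example follows. The keyword lists stand in for real Filter objects, and the folder names match the example above; documents that pass no subfolder filter remain in the destination (default) folder.

# Sketch of the sorting behavior in this example: crawled documents land in
# the News folder and are routed to whichever subfolder's filter they pass;
# documents that pass no subfolder filter stay in the default folder.
# Keyword-based filters are illustrative stand-ins for real Filter objects.

SUBFOLDER_FILTERS = {
    "Politics": ["election", "senate"],
    "Sports": ["football", "playoffs"],
    "Travel": ["airline", "hotel"],
}

def route_document(title: str, default_folder: str = "News") -> str:
    text = title.lower()
    for subfolder, keywords in SUBFOLDER_FILTERS.items():
        if any(keyword in text for keyword in keywords):
            return subfolder
    return default_folder  # passed no subfolder filter

if __name__ == "__main__":
    print(route_document("Playoffs preview: full schedule"))   # Sports
    print(route_document("Weather outlook for the weekend"))   # News (default)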

Sorting Content into Folders with the Smart Sort Utility

You can use the Smart Sort Utility to redistribute content in your portal from one folder to another, applying filters according to your needs.

To redistribute content with the Smart Sort Utility:

  1. Click Administration.
  2. From the Select Utility drop-down list, choose Smart Sort.
  3. Configure source and destination details as described in the online help.

Maintaining Document Links

The Document Refresh Agent is an intrinsic job that updates the document links in the Knowledge Directory. The Document Refresh Agent visits every link in your portal. For each link, the Document Refresh Agent first determines if the link requires refreshing based on the setting for the document record that was imported into the Knowledge Directory. If the link requires refreshing, the Document Refresh Agent looks at the source document. If the source document has changed, any changed content is updated in the search index, and, optionally, the portal properties are regenerated from the source document. For example, if someone adds a line to the source document or changes the author, as soon as the link is refreshed, portal users can locate the document by searching for this new line of text or searching for the new author.

The Document Refresh Agent also deletes links with missing source documents and links that have expired.

You should run the Document Refresh Agent as frequently as you expect your links to require updates. The Document Refresh Agent knows if other copies of the agent are running and will distribute the work across these agents. However, the more copies of this agent that are running, the more CPU cycles are used by the Automation Service, so you should limit the number of agents to fit your CPU resources.
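
The per-link decision sequence described above can be sketched as follows. The link fields and helper functions are hypothetical; the actual agent runs as a job within the Automation Service.

from datetime import datetime

# Sketch of the per-link decision sequence described above. The link fields
# and helper calls are hypothetical, not the agent's actual implementation.

def refresh_link(link, now, fetch_source, update_index, delete_link):
    """link is a dict with 'next_refresh', 'expires', 'checksum', 'url'."""
    if link["expires"] and now > link["expires"]:
        delete_link(link)                      # expired links are removed
        return
    if now < link["next_refresh"]:
        return                                 # not yet due for refresh
    source = fetch_source(link["url"])
    if source is None:
        delete_link(link)                      # source document is missing
        return
    if source["checksum"] != link["checksum"]:
        update_index(link, source)             # re-index changed content
        link["checksum"] = source["checksum"]

if __name__ == "__main__":
    link = {"url": "http://example.com/spec.doc", "checksum": "abc",
            "next_refresh": datetime(2004, 1, 1), "expires": None}
    refresh_link(link, datetime(2004, 2, 1),
                 fetch_source=lambda url: {"checksum": "def"},
                 update_index=lambda l, s: print("re-indexed", l["url"]),
                 delete_link=lambda l: print("deleted", l["url"]))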

For more information on the Document Refresh Agent job, see Running Portal Agents.

To examine the refresh settings for a document:

  1. Click Directory | Edit Directory.
  2. Navigate to the document, select it, and click Document Settings. The options on the Document Settings page determine how the document is refreshed.

 


Extending Portal Services with Portlets

This section describes how to set up and manage the availability of portlets. It includes the following topics:

About Portlets

Portlets provide customized tools and services, as well as information. The portal comes with many portlets, but you can also create your own, have a Web developer or an AquaLogic User Interaction portlet developer create portlets for you, or download portlets from the AquaLogic User Interaction Support Center.

For information on installing and configuring portlets provided as a software package, refer to the portlet software documentation instead of the procedures in this guide.

For information on developing portlets, see the BEA AquaLogic User Interaction Development Center ( http://dev2dev.bea.com/aluserinteraction/).

There are several steps involved in making a portlet available for users to add to My Pages or community pages:

  1. Install the portlet software.
  2. Create a remote server and portlet Web service to define the functional settings.
  3. Optionally, create a portlet template to define display settings on which to base multiple portlets.
  4. Create a portlet to define the portlet display settings.
  5. Optionally, add portlets to a portlet bundle to allow users to easily add groups of related portlets to My Pages or community pages.

Portlet Characteristics

The following table describes some of the characteristics of portlets you might use in your deployment.

Table 4-3 Some Characteristics of Portlets 
Characteristic
Description
Intrinsic or Remote
  • Intrinsic portlets are included with the default portal and are installed on the computer that hosts the portal application.
  • Remote portlets extend the base functionality of the default portal and are hosted on a remote server. When a user displays a My Page or community page that includes a remote portlet, the portal contacts the remote server via HTTP to obtain updated portlet content.
Community or Personal
  • Community portlets can be added only to community pages. You can also create portlets through the Community Editor; these portlets can be used only in that community.
  • Personal portlets can be added to My Pages or community pages.
Narrow or Wide
  • Narrow portlets can be added to narrow or wide columns. Columns extend to fit portlet content; therefore, if you choose Narrow for a portlet that produces wide content, your portal might look awkward.
  • Wide portlets can be added only to wide columns.
Header, Footer, or Content Canvas
  • Header portlets can be added to communities, community templates, and experience definitions to change the branding of these objects by replacing the banner at the top of the page (so that it differs from the top banner displayed by the main portal).
  • Footer portlets can be added to communities, community templates, and experience definitions to change the branding of these objects by replacing the banner at the bottom of the page (so that it differs from the bottom banner displayed by the main portal).
  • Content canvas portlets can be added below the top banner on community pages that include a content canvas space in the page layout. You cannot add more than one content canvas portlet per page.

Note: Header, footer, and content canvas portlets and portlet templates are included with the default portal.

Using Portlets for Navigation and Login

AquaLogic Interaction provides tags that can be used in portlets as an easy way for developers to customize navigation and login components (such as name field, login field, and so on). Two portlets are included in your portal to provide examples of using these tags:

For more information on portal navigation, see Navigation Options.

For more information on using tags, see the BEA AquaLogic User Interaction Development Center ( http://dev2dev.bea.com/aluserinteraction/).

Using Portlets to Access Existing Web Applications

You can enable users to access existing Web applications through the portal. For example, users may need to access an employee benefits system. If they access the benefits system through the portal, they do not have to enter their login credentials separately for that application, and can continue to have the convenience of the portal context, personalization, and navigation.

To surface an existing application through the portal:

  1. (Recommended) Create a lockbox in the portal for the existing application, and have users supply their login credentials for that lockbox.
  2. To create a lockbox:

    1. Click Administration.
    2. In the Select Utility drop-down list, click Credential Vault Manager.
    3. Click New Lockbox and create a lockbox as described in the online help.
    4. Click Finish to close the Credential Vault Manager.

    To supply login credentials for lockboxes, users do the following:

    1. Click My Account.
    2. Click Password Manager.
    3. For each application listed (corresponding to a lockbox), enter the username and password used to access that application.
    4. Click Finish.
  3. Create a remote server in the portal for the existing application:
    1. Click Administration.
    2. Navigate to or create the administrative folder for this server.
    3. In the Create Object drop-down list, select Remote Server.
    4. Configure connection information for the remote server as described in the online help.
    5. Click Finish.
  4. Create a remote portlet Web service in the portal to associate with a portlet you will create for the existing application:
    1. Click Administration.
    2. Navigate to or create the administrative folder for this Web service.
    3. In the Create Object drop-down list, select Web Service - Remote Portlet.
    4. Associate the Web service with the remote server you created in the previous step. Configure the rest of the Web service as described in the online help. You can use the lockbox you created for this application to supply the user credentials for authenticating to this application.
    5. To display the existing application's content in the entire area between the portal header and footer, choose Hosted Display Mode in the HTTP Configuration page of the Web Service Editor. This allows users to see a larger view of the application while preserving portal navigation. Otherwise, the content is displayed within the portlet you will create for this application.
    6. Click Finish.
  5. Create a portlet based on the above Web service:
    1. Click Administration.
    2. Navigate to or create the administrative folder for this portlet.
    3. In the Create Object drop-down list, select Portlet.
    4. In the Choose Template or Web Service dialog box, select the Web service you created in the previous step, and click OK.
    5. Configure the portlet as described in the online help.
    6. Click Finish.
  6. Add the portlet to My Pages or communities.
    You can let users add the portlet on their own (My Pages | Add Portlets or My Communities | Add Portlets), or you can make the portlet mandatory. See Defining Mandatory Portlets.

Portlet Content Caching

Caching some portlet content can greatly improve the performance of your portal. When you cache portlet content, the content is saved on the portal for a specified period of time. Each time a user requests this content—by accessing a My Page or community page that includes the cached portlet—the portal delivers the cached content rather than running the portlet code to produce the content.

When you create a portlet, you can specify whether or not the portlet should be cached, and if it is cached, for how long. You should cache any portlet that does not provide user-specific content. For example, you would cache a portlet that produces stock quotes, but not one that displays a user e-mail box.

If you develop portlet code, you can and should define caching parameters.
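
The following sketch illustrates the time-based caching behavior described above, using a hypothetical cache keyed by portlet. In the portal, caching parameters are set on the portlet Web service and portlet objects and honored by the portal itself; this is not the portal's code.

import time

# Sketch of time-based portlet content caching as described above.
# The cache key and rendering function are hypothetical.

class PortletCache:
    def __init__(self):
        self._entries = {}  # portlet_id -> (expires_at, content)

    def get_content(self, portlet_id, render, ttl_seconds):
        now = time.time()
        cached = self._entries.get(portlet_id)
        if cached and cached[0] > now:
            return cached[1]                   # serve cached markup
        content = render(portlet_id)           # run the portlet code
        if ttl_seconds > 0:                    # a ttl of 0 means "do not cache"
            self._entries[portlet_id] = (now + ttl_seconds, content)
        return content

if __name__ == "__main__":
    cache = PortletCache()
    render = lambda pid: f"<div>stock quotes rendered at {time.time():.0f}</div>"
    first = cache.get_content("stock-quotes", render, ttl_seconds=300)
    second = cache.get_content("stock-quotes", render, ttl_seconds=300)
    print(first == second)  # True: the second request was served from the cache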

For more information on portlet caching, refer to the BEA AquaLogic User Interaction Development Center ( http://dev2dev.bea.com/aluserinteraction/) or the documentation provided with the portlet software.

Portlet Preferences

You can configure the following types of preferences for portlets.

Table 4-4 Portlet Preferences
Preference Type
Description
Administrative Preferences
E-mail portlet example:
Setting which e-mail server to connect to
These preferences are set by the portlet creator on the Main Settings page of the Portlet Editor. They affect everyone's view of the portlet. Users with administrative rights can edit these preferences from My Pages | Edit Portlet Preferences, or by clicking the edit icon in a portlet's titlebar.
Personal Preferences
E-mail portlet example:
Setting how many e-mails are displayed in the portlet
These preferences are set by the user from
My Page | Edit Portlet Preferences or
My Communities | Edit Portlet Preferences.
These preferences affect that user's view of the portlet.
Community Preferences
E-mail portlet example:
Setting a specific public e-mail folder to display, and a shared login/password for that folder
These preferences are set by the community administrator on the Portlet Preferences page of the Community Editor. This page can include community preferences for portlets specific to that community or for other portlets. Community preferences affect everyone's view of portlets in that community.
When in a community, community administrators can edit these preferences from My Communities | Edit Portlet Preferences, or by clicking the edit icon in a portlet's titlebar.
Portlet Template Preferences
Example:
Which portlet Web service to use
These preferences are set by the portlet template creator on the Main Settings page of the Portlet Template Editor. They affect the portlet template itself and all portlets created from that template.
If you change these preferences after portlets have been created from this template, the change will affect only new portlets. Portlets created from this template before the change was made will not be affected.

Creating Portlet Web Services, Portlet Templates, Portlets, and Portlet Bundles

Creating Portlet Web Services

Portlet Web services allow you to specify functional settings for your portlets in a centralized location, leaving the display settings to be set in each associated portlet.

Intrinsic portlets are installed on the portal.

To create a Web service for an intrinsic portlet:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Web Service - Intrinsic Portlet.
  4. Define the portlet Web service as described in the online help.
  5. Click Finish.

Remote portlets extend the base functionality of the default portal and are hosted on a remote server.

To create a Web service for a remote portlet:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Web Service - Remote Portlet.
  4. Define the portlet Web service as described in the online help.
  5. Click Finish.

Creating Portlet Templates

Portlet templates allow you to create multiple instances of a portlet, each sharing much of the basic configuration but displaying slightly different information. For example, you might want to create a Regional Sales portlet template, from which you could create different portlets for each region to which your company sells. You might even want to include all the Regional Sales portlets on one page for an executive overview.

After you have created a portlet from a portlet template, there is no further relationship between the two objects. If you make changes to the portlet template, these changes are not reflected in the portlets already created with the template.

To create a portlet template:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Portlet Template.
  4. Define your portlet template as described in the online help.
  5. Click Finish.

Creating Portlets

To create a portlet (intrinsic or remote):

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Portlet.
  4. Define your portlet as described in the online help.
  5. Click Finish.

Creating Portlet Bundles

Portlet bundles are groups of related portlets, packaged together for easy inclusion on My Pages or community pages. When users add portlets to their My Pages or community pages, they can add all the portlets in a bundle or select individual portlets from a bundle. You might want to create portlet bundles for portlets that have related functions or for all the portlets that a particular group of users might find useful. This makes it easier for users to find portlets related to their specific needs without having to browse through all the portlets in your portal.

To create a portlet bundle:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Portlet Bundle.
  4. Add portlets to the bundle.
  5. Click Finish.

Requiring and Recommending Portlets

This section describes how to require or recommend portlets to groups or users. It includes the following topics:

Defining Mandatory Portlets

You can force users or groups to include a portlet on their default My Page by making it mandatory for those users or groups. Mandatory portlets display above user-selected portlets. Users cannot remove mandatory portlets from their My Pages.

Because mandatory portlets are added to My Pages, the following portlet types cannot be mandatory: Header, Footer, Content Canvas, and community-only portlets.

To make a portlet mandatory for a particular group:

  1. Click Administration.
  2. Navigate to the portlet you want to make mandatory and click its name.
  3. Click the Security page.
  4. Define mandatory settings for users and groups as described in the online help.
  5. Click Finish.

Recommending Portlets

You can recommend portlets to encourage users to add them to their My Pages. Users can recommend any portlet that can be added to a My Page and to which they have access.

Because recommended portlets are added to My Pages, the following portlet types cannot be recommended: Header, Footer, Content Canvas, and community-only portlets.

To recommend a portlet:

  1. From the Add Portlets page or from within the Portlet Editor, click the recommend icon. This displays text, including a URL, that you can paste into an e-mail and send to users.
  2. E-mail the link or add it to a community links portlet.

Adding Multiple Portlets to Multiple Groups' My Pages

You can add one or more portlets to one or more groups' My Pages as a bulk operation.

To add multiple portlets to multiple groups:

  1. Click Administration.
  2. Navigate to an administrative folder containing portlets, or search for portlets.
  3. Select one or more portlets, and click the icon for adding portlets to groups' My Pages.
  4. Select the portlets to add to selected groups' My Pages as described in the online help.
  5. Click Finish. Users will be able to remove the portlets you push to them in this way from their My Pages.

 


Managing Communities

This section describes how to set up portal communities and how to enable content managers to create and manage additional communities. It includes the following topics:

About Communities

A community is similar to a My Page in that it displays portlets. However, communities provide content and services to a group rather than to just an individual user.

You might create communities based on departments in your company. For example, the Marketing department might have a community containing press information, leads volumes, a trade show calendar, and so on. The Engineering department could have a separate community containing project milestones, regulatory compliance requirements, and technical specifications.

You might create communities based on projects your company is working on. For example, a member of the Professional Services department working with a customer to deploy a system could create a community where that group could collaborate on deployment issues. You would probably delete this type of community when the project ends.

Each community is based on a community template, which consists of one or more page templates, which can include portlets. Each page template you add to a community (either through the community template or through the community itself) appears as a link at the top of the community.

Individual community pages have their own security settings, so you can use pages, as well as subcommunities, to control access to different areas of the community.

The first page you add becomes the community Home Page—the default page that displays to users when they visit your community.

Communities can also include the following features:

Creating Page Templates and Community Templates

Creating Page Templates

Page templates include portlets and layout settings that are used as the basis to create pages in communities. A single page template can be used by many different communities, allowing you to keep similar types of pages looking analogous. For example, you might want each department to create a community in which the first page lists the general duties of the group, the department members, and the current projects owned by the department.

Each page template specifies a particular page layout. The page layout determines where particular types of portlets can be displayed on the page. For example, if you want to include a Content Canvas portlet on a page, you must choose a page layout that allows you to do so.

There are three possible parts to a page layout, which are combined in different ways in the available page layouts:

The following page layouts are available (the dark gray sections are content canvas areas).


To create a page template:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Page Template.
  4. Define your page template as described in the online help.
  5. Click Finish.

When you create community pages based on a page template, you have the option to have the pages you are creating inherit any future changes to the template. For example, if you choose to have a community page inherit the template, when you add a portlet on the template, the portlet is added to the associated community pages.

Creating Community Templates

When you create a community, it is based on a community template. Community templates allow you to define the minimum requirements for communities, including page templates and, optionally, a header or footer for the community page. Community creators can add new content and services, but cannot remove the content, services, or design provided by the community template. A single community template can be used by many different communities, allowing you to keep similar types of communities looking similar. For example, you might want all communities based on departments to look similar and contain similar content, while you might want communities based on projects to look different.

You can add Header and Footer portlets to a community in one of two ways:

If you use branding portlets (the Header, Footer, and Content Canvas portlets provided with your portal), community administrators can edit portlet settings such as the text, icon, and color of the header or footer. This allows communities to have similar, but distinct headers and footers.

To create a community template:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Community Template.
  4. Define your community template as described in the online help.
  5. Click Finish.

If you create a community based on a community template, you can choose to have the community you are creating inherit any future changes to the template. If you choose to inherit changes, any change applied to the community template affects the community. For example, if a page template is removed from a community template, the page created from this template will be removed from your community as well.

Creating Communities

You must have Edit access to the community and the Create Communities activity right to create a community or a subcommunity.

To create a community:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Community.
  4. Define your community as described in the online help.
  5. Click Finish.

Creating Subcommunities

Subcommunities (along with pages) allow you to create separately secured subsections in a community, so a subsection can have more restrictive security than the main community. For example, you might have a Marketing Community that includes an Advertising Subcommunity. This subcommunity might have distinct owners or might be accessible to only a subset of the Marketing Community.

A subcommunity is just a community folder stored in another community folder. Therefore, the subcommunity inherits the security and design of the parent community, but you can then change these settings to suit the needs of the subcommunity. You can also change the relationships of communities and subcommunities just by rearranging the folder structure.

Note: If you choose to display a community Knowledge Directory in the subcommunity, it is separate from the community Knowledge Directory in the parent community.

User community access determines subcommunity access:

You must have the Create Communities activity right to create a subcommunity.

To create a subcommunity in a new community:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Community.
  4. Define the pages for your community as described in the online help.
  5. Click the Subcommunities page.
  6. Click Create Subcommunity.
  7. Define the Subcommunity as described in the online help.
  8. Click Finish in the Subcommunity Editor.
  9. Click Finish in the Community Editor.
Note: Subcommunities can be nested up to 10 levels deep.
Caution: The Related Communities tab displays peer communities—the communities that are stored in the same administrative folder as your community. For this reason, consider carefully where to store communities and the administrative folder structure necessary to make related communities useful.

Community Pages

Community pages appear as links in a community. You can create a community page in a community folder or in a community editor. Like communities, pages are based on templates from which you can choose whether or not to inherit future changes. Like other portal objects, community pages can be copied (to another community folder), localized, migrated, and can have unique security settings.

To create a community page:

  1. Click Administration.
  2. Go to a community folder.
  3. From the Create Object drop-down list, select Page.
  4. Choose whether or not to Inherit the Template.

    If you inherit the page template, you cannot delete portlets associated with the page template, but you can add portlets to the page created from the template. If you do not inherit the page template, you can delete portlets associated with the template, add new portlets, and change the page layout.

  5. Define the page as described in the online help.
  6. Click Finish.

Creating Community Groups

A group is a set of portal users to whom you grant specific access privileges. You can create community groups without affecting portal groups. You create community groups so that you can easily assign responsibilities to community members. For example, you might have a group that is responsible for maintaining schedules in the community. If you later want to make your community group available outside of the community, you can move the group from the community folder to another administrative folder.

You must have the Create Groups activity right to create a community group.

To create a community group:

  1. Click My Communities and select the community you want to edit.
  2. Open the Community Editor by clicking Edit this Community on the right.
  3. On the left, under Edit Community Settings, click This Community's Groups.
  4. Create the Community Group as described in the online help.
  5. To save the group, click Finish and complete the Save Object dialog box.

Creating Community Portlets

You can create and manage portlets in the community. You need access to portlet Web services or portlet templates and must have the Create Portlets activity right to create portlets.

Portlets created in the Community Editor are only available within the community. If you later want to make portlets available outside of the community, you can move the portlet from the community folder to a higher level administrative folder.

Note: Removing community portlets from the community deletes them from the portal.

To create portlets available only to this community:

  1. Click My Communities and select the community you want to edit.
  2. Open the Community Editor by clicking Edit this Community on the right.
  3. On the left, under Edit Community Settings, click This Community's Portlets.
  4. Create Community portlets as described in the online help.
  5. After creating your portlet, click Finish.

To display these portlets to community users, you must add these portlets to the appropriate community page.

Managing Community Users and Groups

Community membership controls the community selection in the My Community section of the portal. It also controls the mandatory tabs in the community navigation. You can control who can join, edit, and administer the community.

Users must have Select rights to join the community.

To change the access rights of each member of the community:

  1. Click Administration.
  2. Open the administration folder that contains the community.
  3. Click the community name to open the Community Editor.
  4. Under Edit Standard Settings in the Community Editor, click Security.
  5. Configure the ACL as described in the online help.
  6. Click Finish.

Requiring Communities for Groups

You can make a community mandatory for the members of one or more groups. Users cannot remove themselves from mandatory communities. You can also display tabs for mandatory communities in the banner at the top of the portal, alongside the My Pages and My Communities tabs.

To make a community mandatory for a particular group:

  1. Click Administration.
  2. Open the administration folder that contains the community.
  3. Click the community name to open the Community Editor.
  4. Under Edit Standard Settings in the Community Editor, click Security.
  5. Define mandatory settings for users and groups.
  6. Click Finish.

Recommending Communities

You can recommend communities to encourage users to join them. Users can recommend any community to which they have access.

To recommend a community:

  1. From the Join Communities page or from within the Community Editor, click the recommend icon to display text, including a URL, that you can paste into an e-mail and send to users.
  2. E-mail the link or add it to a community links portlet.

Subscribing Multiple Groups to Multiple Communities

You can subscribe one or more groups to one or more communities as a bulk operation.

To add multiple communities to multiple groups:

  1. Click Administration.
  2. Navigate to an administrative folder containing communities, or search for communities.
  3. Select one or more communities, and click the icon for subscribing groups to communities.
  4. Select the communities to add to selected groups as described in the online help.
  5. Click Finish. Users will be able to unsubscribe from the communities you push to them in this way.

Managing the Community Knowledge Directory

The community Knowledge Directory is an optional part of a community that allows you to provide access to additional community-specific content through a folder hierarchy. There are two folders that are always present in a community Knowledge Directory:

You can also create your own folders and fill them with links to Web sites, user profiles of community experts, documents from the portal Knowledge Directory, and pages in other communities. Users can browse these links from the community Knowledge Directory, or you can display the links in a Community Links portlet.

Note: You might want to create a Community Links portlet that includes links to important secondary community pages and then invite users to add the portlet to their My Pages. This provides direct access to those community pages; users do not have to navigate to the community home page and then click the community page they want.

To create community Knowledge Directory folders or to create a Community Links portlet:

  1. Click My Communities and then the community you want to edit.
  2. Click Community Members and Knowledge Directory to the right of the community page links.
  3. Click Edit (not Edit This Community).
  4. Click the create folder icon.
  5. Type a name and description for the folder and click OK.
  6. Open the folder by clicking its name.
  7. Add links to the folder:
    • To add links to Web sites, click Add Links and complete the settings as described in the online help.
    • To add links to specific user profiles, click Add Experts and complete the settings as described in the online help.
    • To add links to documents in the portal Knowledge Directory, click Add Documents and complete the settings as described in the online help.
    • To add links to pages in another community, click Add Pages and complete the settings as described in the online help.
  8. If you want to display the links in this folder in a portlet in your community, click Content Snapshot and specify the community page to which you want to add the portlet.

 


Enabling Document Discovery with Content Crawlers and Content Services

This section describes how to crawl WWW locations, file system locations, and back-end content and mail servers to make documents in these repositories available through portal links. This section includes the following topics:

For a summary of AquaLogic Interaction content crawlers, as well as guidelines on best practices for deploying content crawlers, see the Deployment Guide for BEA AquaLogic User Interaction G6.

For information on installing and configuring AquaLogic Interaction remote content crawlers, follow the product documentation included with your software instead of the documentation in this guide.

About Content Crawlers

Content crawlers import document records from back-end content sources into Knowledge Directory subfolders according to property-based filters, as shown in the following figure. Each imported record contains descriptive information, such as the content type and properties, the document ACL (read access only), and a link to the source document.

Figure 4-1 Content Crawler Flow Chart


There might be cases where imported content does not pass the filters on any folder, even the destination folder. In these cases you can choose either not to import the rejected content or to place it into the Unclassified Documents folder. If you place the rejected content into the Unclassified Documents folder, you can view this content in the Knowledge Directory edit mode and later move these document records into other Knowledge Directory folders.

The following table summarizes the metadata AquaLogic Interaction content crawlers can import.

Table 4-5 Types of Metadata that can be Imported by AquaLogic Interaction Content Crawlers
Content Crawler: Import Links to Documents / Import Document Security / Import Folder Security
Web Content Crawler: Yes / No / No
Remote Windows Content Crawler: Yes / Yes (Windows) / Yes (Windows)
Remote Exchange Content Crawler (Windows): Yes / No / No
Remote Lotus Notes Content Crawler (Windows): Yes / Yes / No
Remote Documentum Content Crawler: Yes / Yes / Yes

Content crawlers also index the full document text, and this index is used by the Search Service to make documents available through the Search tool.

Developing Content Services to Target Specific Content

To facilitate maintenance, we recommend you implement several instances of each content crawler type, configured for limited, specific purposes.

For file system content crawlers, you might want to implement a content crawler that mirrors an entire file system folder hierarchy by specifying a top-level starting point and its subfolders. Although the content in your folder structure is available on your network, replicating this structure in the portal offers several advantages:

However, you might find it easier to maintain controlled access, document updates, or document expiration by creating several content crawlers that target specific folders.

If you plan to crawl WWW locations, familiarize yourself with the pages you want to import. Often, you can find one or two pages that contain links to everything of interest. For example, most companies offer a list of links to their latest press releases, and most Web magazines offer a list of links to their latest articles. When you configure your content crawler for this source, you can target these pages and exclude others to improve the efficiency of your crawl jobs.
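
The targeting idea can be sketched as a pair of include and exclude URL patterns consulted before a page is crawled. The patterns below are hypothetical examples; real Web content crawlers configure target URLs and exclusions in the Content Crawler Editor.

from fnmatch import fnmatch

# Sketch of targeting a Web crawl with include and exclude patterns, as
# suggested above. The patterns are hypothetical examples.

INCLUDE_PATTERNS = ["http://www.example.com/press/*", "http://www.example.com/articles/*"]
EXCLUDE_PATTERNS = ["*/archive/*", "*.gif", "*.css"]

def should_crawl(url: str) -> bool:
    if any(fnmatch(url, pattern) for pattern in EXCLUDE_PATTERNS):
        return False
    return any(fnmatch(url, pattern) for pattern in INCLUDE_PATTERNS)

if __name__ == "__main__":
    print(should_crawl("http://www.example.com/press/2004-01-18.html"))   # True
    print(should_crawl("http://www.example.com/press/archive/old.html"))  # False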

If you know that certain content will no longer be relevant after a date—for example, if the content is related to a fiscal year, a project complete date, or the like—you might want to create a content crawler specifically for the date-dependent content. When the content is no longer relevant, you can run a job that removes all content created by the specific content crawler.

For remote content crawlers, you might want to limit the target for mail content crawlers to specific user names; you might want to limit the target for document content crawlers to specific content types.

For additional considerations and best practices, see the Deployment Guide for BEA AquaLogic User Interaction G6.

Configuring a Content Crawler

Content services allow you to specify general settings for your remote content repository, leaving the target and security settings to be set in the associated remote content crawler. This allows you to crawl multiple locations in the same content repository without having to repeatedly specify all the settings.

If you plan to use an AquaLogic Interaction content Web service (AquaLogic Interaction Content Services) to crawl document repositories, follow the product documentation provided with that software instead of the procedures in this guide. AquaLogic Interaction remote content crawlers include a migration package that enables you to import pre-configured remote server and Web service objects.

The following table describes the steps you take to configure a target-specific content service.

Table 4-6 Configuring a Target-Specific Content Service
Basic Step
Procedure
Create a remote server.
  1. Click Administration.
  2. Navigate to or create the administrative folder for content service objects.
  3. In the Create Object drop-down list, select Remote Server.
  4. Configure connection information for the remote server as described in the online help.
  5. Click Finish.
Create a content service.
  1. Click Administration.
  2. Navigate to the administrative folder for the group of content service objects you are configuring.
  3. In the Create Object drop-down list, select Web Service - Content.
  4. Configure connection information for the Web service as described in the online help.
  5. Click Finish.
Configure a content source.
  1. Click Administration.
  2. Navigate to the administrative folder for the group of content service objects you are configuring.
  3. In the Create Object drop-down list, click Content Source - WWW for a Web site or Content Source - Remote for a back-end content repository.
  4. Configure connection information for the content source as described in the online help.
  5. Click Finish.
Configure a document property map.
If you need to define a new content type and properties for your content, follow the procedures in Configuring Content Types and Document Properties.
Configure a content crawler and crawl job. When you configure a content crawler, you specify:
  • location of source documents
  • content types that determine how source document properties are mapped to portal properties
  • document security settings
  • document sorting, refresh, and purge settings
  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Content Crawler - WWW for a Web site or Content Crawler - Remote for a back-end content repository.
  4. Define your content crawler as described in the online help.
  5. On the Set Job page, add this operation to a job and schedule the job to run.
  6. Click Finish.
Note: For information on how to organize crawled content in folders, see Using Filters to Organize Crawled Content.
Map imported security in the Global ACL Sync Map. To import security, the domain and group information for the source being crawled must be mapped to an authentication source prefix in the Global ACL Sync Map. If you run a content crawler and find that some or all of the security has not been imported, map the domain in the Global ACL Sync Map and run the content crawler again.
  1. Click Administration.
  2. In the Select Utility drop-down list, choose Global ACL Sync Map.
  3. Add the domain prefix and group mappings for the content source as described in the online help.
  4. Click Finish.

Testing Content Crawlers

Before you have a content crawler import content into the public folders of your portal, test it by running a job that crawls document records into a temporary folder.

When you create the test folder, remove the Everyone group, and any other public groups, from the Security page on the folder to ensure that users cannot access the test content.

The following table provides a summary test plan for your content crawlers.

Table 4-7 A Test Plan for Content Crawlers
Test Objective
Steps
Make sure the content crawler creates the correct links.
Examine the target folder and ensure the content crawler has generated records and links for desired content and has not created unwanted records and links.
If you iterate this testing step after modifying the content crawler configuration, make sure you delete the contents of the test folder and clear the deletion history for the content crawler as described in Clearing the Deletion History.
Make sure the content crawler creates correct metadata.
Make sure that all documents are assigned the correct content types, and that these content types correctly map source document attributes to portal properties.
In the Knowledge Directory, look at the properties and content types of a few of the documents this content crawler imported to see if they are the properties and content types you expected.
To view the properties and content type for a document:
  1. Click Directory and navigate to the folder that contains the document whose properties and content type you want to view.
  2. Click Properties under the document to display the information about the document. The properties are displayed in a table along with their values. The content type is displayed at the bottom of the page.
If you iterate this testing step after modifying the content crawler configuration, make sure you configure the content crawler to refresh these links. For information on refreshing links, see Keeping Document Records Up-to-Date.
Test properties, filters, and search.
To test that document properties have been configured to enable filters and search, browse to the test folder, and perform a search using the same expression used by the filter you are testing. Either cut and paste the text from the filter into the portal search box or use the Advanced Search tool to enter expressions involving properties. Select the Search Only in this Folder option. The links that are returned by your search are for the documents that will pass your filter.

Maintaining Content Imported by Content Crawlers

This section describes how to maintain document records imported by content crawlers. It includes the following topics:

Keeping Document Records Up-to-Date

The Document Refresh Agent is an intrinsic job that updates the records in the Knowledge Directory. The Document Refresh Agent examines every link in your portal. For each link, the agent first determines whether the link requires refreshing, based on the document record settings specified when the file was uploaded or by the content crawler that created the link.

To administer refresh settings for the content crawler:

  1. Click Administration.
  2. Navigate to the content crawler whose document records you want to refresh.
  3. On the Document Settings page and Advanced Settings page, configure document refresh attributes.
  4. Click Finish.

Clearing the Deletion History

Content crawlers keep a history of actions performed on crawled document records, including the deletion history. If you delete records, the content crawler remembers that the content was imported and deleted and it will not attempt to re-import this content. If you later decide to import records for that content, you must clear the deletion history.

To clear the deletion history:

  1. Click Administration.
  2. Navigate to the content crawler and click its name.
  3. On the Advanced Settings page, click Clear Deletion History.
  4. Click Finish.

Removing Document Records

By carefully targeting your content crawler to generate content on only one topic, you allow for the easy removal of a topic that becomes irrelevant, without disturbing unrelated content.

To remove all the content ever imported by a particular content crawler:

  1. Click Administration.
  2. Navigate to the content crawler whose imported links you want to remove and click its name. This launches the Content Crawler Editor.
  3. On the Document Settings page, set the generated content to be deleted immediately, and select Apply these settings to existing documents created by this content crawler.
  4. Click Finish.
Note: The next time the Document Refresh Agent runs, it deletes all of the records created by this content crawler.

 


Working with Search

This section describes how to implement search for documents that reside in the Knowledge Directory, in communities, or in the collection of crawled links. It includes the following topics:

Customizing Search Service Behavior

This section describes how to customize portal search. It includes the following topics:

For information on default behavior for search syntax and results ranking, see Default Behavior of Search Service.

Configuring Best Bets and Top Best Bets

You configure best bets with the Search Results Manager. Best bets associate search phrases that you specify with a set of search results, in rank order. In addition, users can go directly to the highest-ranking result, the top best bet, instead of seeing the normal search results.

When end-users enter a banner search query that matches a best bet search phrase, the best bet results appear as the first results in the relevance-ranked result list. The phrase "Best Bet" appears next to each best bet result to inform the user that the result has been judged especially relevant to his or her query.

Best bets apply only to the portal banner search box and search portlet. Best bets are not used by other portal search interfaces, such as advanced search and object selection search.

Note: Best bets are case-insensitive.

To create a best bet:

  1. Click Administration.
  2. From the Select Utility drop-down list, select Search Results Manager.
  3. Launch the Best Bet Editor by clicking New Best Bet.
  4. Complete the best bet settings as described in the online help.
  5. Click Finish to save your best bet settings.
  6. Click Finish in the Search Results Manager.

You can create hundreds of best bets, each mapping to a maximum of 20 results.

Since best bets are handled by the Search Service and are not managed portal objects, best bets do not migrate from development to production environments; you must re-create them in the production environment.

Working With Top Best Bets

The highest ranking best bet result for a given search term is the top best bet. If best bets are set for a term, instead of seeing search results, users can go directly to the top best bet result (an object such as a community or document) by doing one of the following:

If there are no best bets set for the term the user entered, the search results for the term are displayed instead.

If an object is a top best bet for any search terms, those terms are listed on the Properties and Names page of the object's editor.

Modifying the Properties Searched and the Relevance Weight for Properties

When a user enters a query into a search box in the portal, the portal searches the properties specified on the Banner Fields page of the Search Results Manager. The default banner field properties are Name, Description, and Full-Text Content. However, you can also add other properties, such as Keyword, Department, or Author, to further refine the search results.

Another way of controlling the search results is by modifying the relevance weight for banner field properties. Overweighting a property increases its relevance ranking; underweighting it decreases it. For example, you can manipulate the search to first return documents whose content matches the search string (by overweighting the Full-Text Content property), followed by documents whose name matches the search string (by underweighting the Name property). When users type widgets, documents with widgets in the content appear first in the relevance-ranked search results, followed by documents or files with widgets in their names.

Banner field settings apply to the banner search box, advanced search, object selection search, and all other portal search interfaces.

To configure the weights of existing banner fields:

  1. Click Administration.
  2. From the Select Utility drop-down list, select Search Results Manager.
  3. Under Edit Utility Settings, click Banner Fields.
  4. Complete the Banner Field settings as described in the online help.
  5. Click Finish.

To add new banner fields:

  1. Click Administration.
  2. From the Select Utility drop-down list, select Search Results Manager.
  3. Under Edit Utility Settings, click Banner Fields.
  4. Click Add Field.
  5. From the drop-down list that appears, select the banner field that you want to add.
  6. Complete the Banner Field settings as described in the online help.
  7. Click Finish.

Since banner fields and relevance weights are Search Service settings and not managed portal objects, these settings do not migrate from development to production environments; you must re-create them in the production environment.

Enabling Spell Correction

Automatic spell correction is applied to the individual terms in a basic search when the terms are not recognized by the Search Service. Spell correction is not applied to quoted phrases.

For example, if a user queries for portel server but the term portel is unknown to the Search Service, items matching the terms portal and server would be returned instead. The same applies to Internet style mode and query operators mode. So, for instance, a search for portel <NEAR> server would return documents containing the terms portal and server in close proximity, but only if there are no matches for portel and server in close proximity.

Automatic spell correction is enabled by default. You can disable it from the Search Results Manager in the administrative portal user interface.

To disable the automatic spell correction:

  1. Click Administration.
  2. From the Select Utility drop-down list, select Search Results Manager.
  3. Under Edit Utility Settings, click Thesaurus and Spell Correction.
  4. Clear the Apply Spell Correction check box.
  5. Click Finish.

Implementing a Search Thesaurus

The Search Service allows you to create a thesaurus (or synonym list), load it into the server, and enable thesaurus expansion for all user queries. Thesaurus expansion allows a term or phrase in a user's search to be replaced with a set of custom related terms before the actual search is performed. This feature improves search quality by handling unique, obscure, or industry-specific terminology.

For example, with conventional keyword matching, a search for the term gadgets might not return documents that discuss portlets or Web services. But, by creating a thesaurus entry for gadgets, it is possible to avoid giving users zero search results because of differences in word usage. The entries allow related terms or phrases to be weighted for different contributions to the relevance ranking of search results. For example, gadgets is not really a synonym for Web services, so a document that actually contains gadgets should rank higher than one that contains Web services.

The entries are lower-case, comma-delimited lists of the form:

gadgets,portlets,web services[0.5]

In this example, the number [0.5] corresponds to a non-default weighting for the phrase web services.

Note: Thesaurus entries must be lower-case.

Thesaurus entries can be created to link closely related terms or phrases, specialized terminology, obsolete terminology, abbreviations and acronyms, or common misspellings. The expansion works by simply replacing the first term in an entry with an OR query consisting of all the terms or phrases in the entry. The weights are then taken into consideration when matching search results are ranked.
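
For example, a minimal sketch of an entry that maps an acronym to its expansion (the specific terms here are hypothetical, not taken from this guide) might look like:

faq,frequently asked questions[0.9]

With an entry like this, a search for faq also matches items containing the phrase frequently asked questions, and those matches contribute slightly less (90%) to the relevance score than matches on faq itself.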

The thesaurus expansion feature is best used for focused, industry- or domain-specific terminology. It is not intended to cover general semantic relationships between words or across languages, as a conventional paper thesaurus does. Although Search Service thesaurus expansion can improve search quality, adding entries for very general or standard terms can actually degrade search quality if it leads to too many search result matches.

Enabling the Thesaurus

To enable the thesaurus:

  1. Click Administration.
  2. From the Select Utility drop-down list, select Search Results Manager.
  3. Under Edit Utility Settings, click Thesaurus and Spell Correction.
  4. Select Use the Thesaurus.
  5. Click Finish.

After you enable this feature, you must create and load the synonym list, as described next.

Setting Up the Synonym List for the Thesaurus

To set up the search thesaurus:

  1. Create a comma-delimited, UTF-8 file containing the desired thesaurus entries.
    Note: Thesaurus entries must be in lower-case.

    The thesaurus is a comma-delimited file, also known as a CDF. Each line in the file represents a single thesaurus entry. The first comma-delimited element on a line is the name of the thesaurus entry. The remaining elements on that line are the search tokens that should be treated as synonyms for the thesaurus entry. Each synonym can be assigned a weight that determines the amount each match contributes to the overall query score. For example, a file that contains the following two lines defines thesaurus entries for couch and dog:

    couch,sofa[0.9],divan[0.5],davenport[0.4]
    dog,canine,doggy[0.85],pup[0.7],mutt[0.3]

    Searches for couch generate results with text matching terms couch, sofa, divan, and davenport. Searches for dog generate results that have text matching terms dog, canine, doggy, pup, and mutt. In the example shown, the term dog has the same contribution to the relevance score of a matching item as the term canine. This is equivalent to a default synonym weighting of 1.0. In contrast, the presence of the term pup contributes less to the relevance score than the presence of the term dog, by a factor of 0.7 (70%).

    The example thesaurus entries constitute a complete comma-delimited file. No other information is needed at the beginning or the end of the file.

    Entries can also contain spaces. For example, a file that contains the following text creates a thesaurus entry for New York City:

    new york city,big apple[0.9],gotham[0.5]

    Searches for the phrase "new york city" will return results that also include results containing "big apple" and "gotham." Thesaurus expansion for phrase entries only occurs for searches on the complete phrase, not the individual words that constitute the phrase. Similarly, the synonym entries are treated as phrases and not as individual terms. So while a search for "new york city" returns items containing "big apple" and "gotham," a search for new (or for york, or for city, or for "new york") will not. Conversely, an item that contains big or apple but not the phrase "big apple" will not be returned by a search for "new york city."

    Comma-delimited files support all UTF8-encoded characters; they are not limited to ASCII. However, punctuation should not be included. For example, if you want to make ne'er-do-well a synonym of wastrel, replace the punctuation with whitespace:

    wastrel,ne er do well[0.7]

    This matches documents that contain ne'er-do-well, ne er do well, or some combination of punctuation and spaces (such as ne'er do well). If you want your synonym to match documents that contain neer-do-well, which does not separate the initial ne and er with an apostrophe, you must include a separate synonym for that, such as:

    wastrel,ne er do well[0.7],neer do well[0.7]

    Finally, comment lines can be specified by beginning the line with a "#":

    # furniture entries 
    couch,sofa[0.9],divan[0.5],davenport[0.4]
    #chair,stool[5.0]
    # animal entries
    dog,canine[0.9],doggy[0.85],pup[0.7],mutt[0.3]

    In this example, the Search Service parses two thesaurus entries: couch and dog. There will be no entry for chair.

    These examples are of entries that contain only ASCII characters. This utility supports non-ASCII characters as well, as long as they are UTF8-encoded.

    Note: Some editors, especially when encoding UTF-8, insert a byte order mark at the beginning of the file. Files with byte order marks are not supported, so remove the byte order mark before running the customize utility.

    A CDF thesaurus file can have at most 50,000 distinct entries (lines). Each entry can have at most 50 comma-delimited elements (including the name of the entry). If either of these limits is exceeded, the customize utility exits with an appropriate error message.

  2. Stop the Search Service.
    The comma-delimited file is converted to a binary format in the next step. The conversion removes and replaces certain files used by the Search Service, and this removal and replacement cannot be done while the Search Service is running.

  3. At a command prompt, run the customize utility.
    The customize utility can be found in the bin\native directory of the Search Service installation, for example, C:\bea\alui\ptsearchserver\6.1\bin\native\customize.exe. The utility must be run from a command prompt, taking command-line arguments for the thesaurus CDF file and the path to the Search Service installation:

    customize -r <thesaurus file> <SEARCH_HOME>

    where SEARCH_HOME is the root directory of the Search Service installation, for example, C:\bea\alui\ptsearchserver\6.1. This is not an environment variable that needs to be set; the directory merely needs to be specified directly on the command line. For example, if your thesaurus file is located in \temp, you enter:

    customize -r \temp\thesaurus.cdf C:\bea\alui\ptsearchserver\6.1

    When you run the customize utility, the files in SEARCH_HOME\common are removed and replaced by files of the same name, though their contents now represent the mappings created by the customize utility. The customize utility has a command-line mode for reverting to the set of mappings files that shipped with the Search Service (and hence removing any thesaurus customizations). This mode uses the -default flag in place of -r <thesaurusfile>, but otherwise is identical to the invocations shown above:

    customize -default C:\bea\alui\ptsearchserver\6.1
  4. Restart the Search Service.
    The files produced by the customize utility are loaded when the Search Service starts.

Customizing Categorization of Search Results

Users can use the Sort By drop-down list on the search results page to sort results by object type or by folder location in the Knowledge Directory or Administrative Object Directory. You can customize this drop-down list to include additional categories relevant for your users. If you use a property in your portal documents named Region, for example, you can customize the Sort By drop-down list to include Sort By Region: New England, Midwest, and so forth.

The first issue to consider when deciding whether to categorize search results by a particular property is whether the property will be defined for a substantial percentage of all search results. For instance, if 90% of search results do not have the property defined, then most results will fall under "All Others" when you categorize by that property, and the categorization will not be very useful. As a rule of thumb, do not add a custom categorization option for a property that is undefined for more than half of all documents and administrative objects.

The other issue to consider is whether the values for the property will make reasonable category titles. In order for categorization to work well for a property, each value should be a single word or a short noun phrase, for example, New England, Midwest, Product Management, Food and Drug Administration, and so forth. The values should not be full sentences or long lists of keywords, for example, "This content crawler crawls the New York Times finance section". The entire contents of the property value for each item will be considered as a single unit for the purposes of categorization, so it will look odd if a full sentence is returned as a category title.

Setting Up Property Data for Categorization

The first step in the process of adding a new categorization option is to ensure that documents and objects include the property you want to use to sort by category. For information on setting up maps from source document attributes to portal properties, see Configuring Content Types and Document Properties. Ensure that the property that defines the category for sorting has the following configuration:

Enabling Results Sorting

To enable results sorting by property, add the following settings within the <Search> section of portalconfig.xml:

<CategoryName_1 value="CategoryName"/>
<CategoryField_1 value="PTObjectID"/>

CategoryName is the name you want to appear in the Sort By drop-down list, for example, Region.

ObjectID is the integer that identifies the property object. To find the object ID, right-click the link to the property object and then choose Properties. This will yield a link that looks something like this:

http://portal.company.com/portal/server.pt?open=36&objID=200&parentname=ObjMgr&parentid=5&mode=1&in_hi_userid=1&cached=true

The objID argument contains the integer you want. In this link, the object ID is 200, so complete the CategoryField entry as follows:

<CategoryField_1 value="PT200"/> 

You can add multiple custom categorization options by adding analogous tags named CategoryName_2, CategoryField_2, CategoryName_3, CategoryField_3, and so forth. In portalconfig.xml, the Category tags must be numbered consecutively without skipping. For example, if there is a <CategoryName_3> tag, there must be tags for Category 1 and 2.
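
For example, a sketch of these settings with two custom categorization options might look like the following; the Region property (object ID 200) comes from the example above, and the second property (Department, object ID 450) is purely hypothetical:

<CategoryName_1 value="Region"/>
<CategoryField_1 value="PT200"/>
<CategoryName_2 value="Department"/>
<CategoryField_2 value="PT450"/>

Both options then appear in the Sort By drop-down list on the search results page.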

For more information about the portalconfig.xml file, see Configuring Advanced Properties and Logging.

Managing Grid Search

Grid search consists of shared files (for example, C:\cluster) and search nodes. When you start up the Search Service, it looks at the cluster.nodes file in the shared files location to determine the host, port, and partition of each node in the cluster. It monitors and communicates the availability of the search nodes and distributes queries appropriately.

The Search Service also automatically repairs and reconciles search nodes that are out of sync with the cluster. At startup, nodes check their local TID (transaction ID) against the current cluster checkpoint and index queues. If the current node is out-of-date with respect to the rest of the cluster, it must recover to a sufficiently current transaction level (at or past the lowest cluster node TID) before servicing requests for the cluster. Depending upon how far behind the local TID is, this operation may require retrieval of the last-known-good checkpoint data in addition to replaying queued index requests.

Although the Search Service performs many actions automatically to keep your cluster running properly, there are some maintenance and management tasks you perform manually to ensure quality search in your portal. This section includes the following topics:

Updating the Search Collection

As users create, delete, and change objects in the portal, the search index gets updated. In some cases, the portal updates the search index immediately; in other cases, the search is not updated until the next time the Search Update Agent runs. The following table describes the cases in which the search index is updated immediately (I) or updated by the Search Update Agent (SU).

Table 4-8 How the Search Index is Updated

Object                   Create   Delete   Move   Change Name or Description   Change Other Properties
Document                 I        SU       SU     I                            I
Directory Folder         I        SU       SU     I                            SU
Administrative Folder    I        I        I      I                            I
Administrative Object    I        I        I      I                            I

Note: If the Knowledge Directory preferences are set to use the search index to display browse mode, changes will not display until the Search Update Agent runs. The Knowledge Directory edit mode and the Administrative Object Directory display objects according to the database, and therefore show changes immediately.

The Search Update job is located in the Intrinsic Operations administrative folder. It performs the following actions on the search index:

The default frequency of the Search Update job is one hour, which is suitable for most portal deployments; but, if your search index is very large, the Search Update Agent might not be able to finish in one hour. For information on modifying Search Update job settings, see Running Portal Agents.

Repairing Your Search Index

Your search index might get out of sync with your database if, during the course of a crawl, the Search Service became unavailable or a network failure prevented an indexing operation from completing. Another possibility is that a Search Service with empty indices was swapped into an existing portal with pre-existing documents and folders.

The Search Service Manager lets you specify when and how often the Search Update Agent repairs your search index. Instead of synchronizing only particular objects, the repair synchronizes all objects in the database with the search index. Searchable objects in the database are compared with IDs in the search index. If an object ID in the database is not in the search index, the Search Update Agent attempts to re-index the object; if an ID in the search index is not in the database, the Search Update Agent removes the object from the search index.

Run the Search Update Agent for purposes of background maintenance or complete repopulation of the search index.

To configure Search Repair:

  1. Click Administration.
  2. In the Select Utility drop-down list, click Search Service Manager.
  3. Under Search Repair Settings, specify when the next search repair should occur and the interval between subsequent repair sessions.
  4. Click Finish.

Managing Checkpoints and Restoring Your Search Collection

A checkpoint is a snapshot of your search cluster that is stored in the cluster folder (for example, C:\bea\alui\cluster), a shared repository available to all nodes in the cluster. When a new cluster node is initialized, or a node recovers from a catastrophic failure, the last known good checkpoint provides the initial index data for the node's partition, and any transaction data added since the checkpoint was written is replayed to bring the node up to date with the rest of the cluster.

You manage checkpoints on the Checkpoint Manager page of the Search Cluster Manager. You can perform the following actions with the Checkpoint Manager:

Note: For instructions on using the Search Cluster Manager, refer to online help.

Since checkpoint data is of significant size, limit the number of checkpoints maintained by the system. Specify how many checkpoints to keep on the Settings page of the Search Cluster Manager. Refer to online help for details.

Managing Search Cluster Topology

Your search cluster is made up of one or more partitions, each of which is made up of one or more nodes. As your search collection becomes larger, the collection can be partitioned into smaller pieces to facilitate more efficient access to the data. As the Search Service becomes more heavily utilized, replicas of the existing partitions, in the form of additional nodes, can be used to distribute the load. Additional nodes also provide fault-tolerance; if a node becomes unavailable, queries are automatically issued against the remaining nodes.

Note: If a partition becomes unavailable, the cluster continues to provide results; however, the results will be incomplete (and are flagged as incomplete in the query response).

You manage the partitions and nodes in your search cluster on the Topology Manager page of the Search Cluster Manager. You can perform the following actions with the Topology Manager:

Monitoring Search Activity with Logs

Search logs are kept for the search cluster as well as for each node in the search cluster. The cluster logs are stored in the \cluster\log folder, for example, C:\bea\alui\cluster\log\cluster.log. The cluster logs include cluster-wide state changes (such as cluster initialization, node failures, and node recoveries), errors, and warnings.

The node logs are stored in the node's logs folder, for example, C:\bea\alui\ptsearchserver\6.1\node1\logs. There are two kinds of node logs: event logs and trace logs. Event logs capture major node-local state changes, errors, warnings, and events. Trace logs capture more detailed tracing and debugging information.

There are several ways to view the logs:

A new cluster log is created with each new checkpoint. The log that stores all activity since the last checkpoint is called cluster.log. When a new checkpoint is created, the cluster.log file is saved with the name <checkpoint>.log, for example, 0_1_5116.log.

Using the Command Line Admin Utility

The Command Line Admin Utility allows you to perform the same functions you can perform in the Search Cluster Manager as well as the following additional functions:

The Command Line Admin Utility is located in the bin\native folder of the Search Service installation folder, for example, C:\bea\alui\ptsearchserver\6.1\bin\native\cadmin.exe. Invoking the command with no arguments displays a summary of the available options:

% $RFHOME/bin/cadmin
Usage: cadmin <command> [command-args-and-options] [--cluster-home <CLUSTER_HOME>]
Requesting Cluster Status

The status command displays the status of the cluster. By default, the status command displays a terse, one-line summary of the current state of the cluster:

% cadmin status --cluster-home=/shared/search
2005-04-22 13:54:13 checkpoint_xxx 0/1/198 0/1/230 impaired

If you add the verbose flag, the status command displays the full set of information, including the status of every node in the cluster:

% cadmin status --verbose --cluster-home=/shared/search
2005-04-22 13:54:13 /shared/search checkpoint_xxx
cluster-state: impaired
cluster-tid: 0/1/198 0/1/230
partition-states: complete impaired
node p0n0: 0 192.168.1.1 15244 0/1/198 0/1/460 run
node p0n1: 0 192.168.1.2 15244 0/1/198 0/1/460 run
node p1n0: 1 192.168.1.3 15244 0/1/198 0/1/230 run
node p1n1: 1 192.168.1.4 15244 0/1/100 0/1/120 offline

You can also use the status command to repeatedly emit status requests at a specified interval:

% cadmin status --period=10 --count=5
2005-04-22 13:54:13 checkpoint_xxx 0/1/198 0/1/230 impaired
2005-04-22 13:54:23 checkpoint_xxx 0/1/198 0/1/230 impaired
2005-04-22 13:54:33 checkpoint_xxx 0/1/198 0/1/230 impaired
2005-04-22 13:54:43 checkpoint_xxx 0/1/198 0/1/230 impaired
2005-04-22 13:54:53 checkpoint_xxx 0/1/400 0/1/428 complete
Requesting Specific Cluster Status Information

You can request information about specific nodes within the cluster. This displays the same type of information that is displayed as part of the verbose cluster status request:

% cadmin nodestatus p0n0 p1n0
node p0n0: 0 192.168.1.1 15244 0/1/198 0/1/460 run
node p1n0: 1 192.168.1.3 15244 0/1/198 0/1/230 run

As with cluster status, you can request periodic status output:

% cadmin nodestatus p0n0 p1n0 --period=10
2005-04-22 13:54:13 p0n0 0 192.168.1.1 15244 0/1/198 0/1/460 run
2005-04-22 13:54:13 p1n0 1 192.168.1.3 15244 0/1/198 0/1/230 run
2005-04-22 13:54:23 p0n0 0 192.168.1.1 15244 0/1/198 0/1/460 run
2005-04-22 13:54:23 p1n0 1 192.168.1.3 15244 0/1/198 0/1/230 run
Changing the Run Level of the Cluster

You can modify the run level of the cluster, or of individual nodes within the cluster. For example, you might want to place nodes in standby mode prior to changing cluster topology or shutting them down. Transitioning from standby to any of the operational modes (recover, readonly, stall, run) will validate the node's state against the cluster state and will trigger a checkpoint restore if one is warranted.

Transitions to readonly or offline modes are also potentially useful: readonly mode halts incorporation of new index data on a node; offline mode will cause the search server to exit.

To set the run level of p0n0 and p1n0 to standby:

% cadmin runlevel standby p0n0 p1n0

To set the run level of the entire cluster to run (this affects only non-offline nodes):

% cadmin runlevel run
Purging and Resetting the Search Collection

You can purge the contents of the search collection. You might want to purge the cluster on staging or development systems, or when you want to clean out the search collection without re-installing all the nodes. Purging the search collection may also be useful in a dire situation where the contents of the cluster are corrupted beyond repair and good checkpoints are not available for recovery.

By default, the checkpoints and index queue are left in place. This allows you to rebuild the local index on a node whose archive appears to be corrupted.

To purge the search collection, but keep checkpoints:

% cadmin purge
Caution: As a safeguard against performing this operation by accident, all cluster nodes must be in standby mode and you must confirm the action before the purge command is sent out.

The purge command causes a node to generate empty archive collections (document, spell, and mappings) and perform a soft-restart to load them into memory. Before reloading, the admin utility updates the checkpoint files in the shared repository to prevent the nodes from automatically reloading from an existing checkpoint.

To purge the search collection and delete existing checkpoints:

% cadmin purge --remove-checkpoints
Initiating a Cluster Checkpoint

You can request a cluster checkpoint at any time (in addition to any periodic checkpoints initiated by the cluster):

% cadmin checkpoint

Since creating a checkpoint is a time-consuming process, the admin utility displays its progress:

Checkpoint using nodes: p0n0 p1n1 p2n0
Node p0n0 copying data 
Node p1n1 copying data
Node p2n0 copying data
0%..10%..20%..30%..40%..50%..60%..70%..80%..90%..100%
Checkpoint complete in \\cluster_home\checkpoint_xxx

If the cluster has insufficient active nodes to perform the checkpoint, the admin utility displays appropriate feedback:

Node p0n0 is offline
Node p0n1 is offline
Unable to checkpoint at this time: partition 0 is unavailable

Any error messages encountered during the checkpoint process also display:

Checkpoint using nodes: p0n0 p1n1 p2n0
Node p0n0 copying data 
Node p1n1 copying data
Node p2n0 copying data
0%..10%..20%..
Node p1n1 is offline
Checkpoint aborted
Reloading from a Checkpoint

You can request a checkpoint restore at any time:

% cadmin restore

Since restoring from a checkpoint is a time-consuming process, the admin utility displays its progress:

Restoring cluster from \\cluster_home\checkpoint_xxx
Node p0n0 retrieving data
Node p0n1 retrieving data
0%..10%..20%..30%..40%..50%..60%..70%..80%..90%..100%
Node p0n0 restarted
Node p0n1 restarted
Restoration complete
Changing Cluster Topology

You use the same command to add or remove nodes from the search cluster as you do to repartition the cluster:

% cadmin topology new.nodes

The difference is how you change the cluster.nodes file:

Since changing cluster topology can be a time-consuming process, the admin utility displays its progress. The following is an example of the output when you add and remove nodes:

Current topology:
<contents of current cluster.nodes file>
New topology:
<contents of new.nodes file>
Nodes to add: p0n2, p1n2, p2n2
Nodes to remove: p0n0, p1n0, p2n0
Is this correct (y/n)? y
Applying changes...
p0n2 has joined
p2n0 has left
...
Changes applied successfully

The following is an example of the output when you repartition the cluster:

Current topology:
<contents of current cluster.nodes file>
New topology:
<contents of new.nodes file>
Nodes to add: p3n0, p3n1
Is this correct (y/n)? y
CAUTION: the requested changes require repartitioning the search collection
The most recent checkpoint is checkpoint_xxx from 2004-04-22 16:00:00
Is this correct (y/n)? y
Repartitioning from 3 partitions into 4
0%
5%
<progress messages>
100%
Repartitioning successful
Applying changes...
p0n2 has joined
p2n0 has left
...
Changes applied successfully

If the repartition fails, the cluster is left in its original state, if at all possible, and information about the failure is provided. The cluster.nodes file is rolled back to its previous state after making sure that the last known good checkpoint refers to an un-repartitioned checkpoint directory.

Aborting a Checkpoint or Reconfiguration Operation

You can abort a long-running checkpoint or cluster reconfiguration operation by exiting from the command line utility with Control-C. The cluster will be restored to its state prior to attempting the checkpoint or topology reconfiguration.

In the case of a checkpoint operation, the utility sends a "checkpoint abort" command to the checkpoint coordinator to cleanly abort the checkpoint create/restore operation.

In the case of a cluster reconfiguration, the utility restores the original cluster.nodes file and initiates a soft restart of the affected cluster nodes to restore the cluster to its previous configuration.

Creating Snapshot Queries

A snapshot query allows you to display the results of a query in a portlet or e-mail the results to users. You can select which repositories to search (including Publisher and Collaboration), and limit your search by language, object type, folder, property, and text conditions.

To create a snapshot query:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, select Snapshot Query.
  4. Complete the query and results format according to the online help.
  5. Click Finish.
  6. When prompted, specify a folder to which to save the snapshot query.
  7. The editor prompts you to send an invitation to view the query. Follow the editor instructions to do so.

Implementing Federated Searches

This section describes federated searches, which allow your users to search external repositories for content or allow users of other portals to search your portal for content. This section includes the following topics:

About Federated Searches

Federated searches connect separate AquaLogic Interaction portals with one another and with external repositories. Federated searches empower dispersed organizations to deploy multiple portals and link them together, thereby combining local control over content with global scope. Federated searches provide end-users a single interface and unified result set for searches over multiple AquaLogic Interaction portals, as well as parallel querying of external Internet and intranet-based search engines.

When you install the portal, the Public Access Incoming Federated Search is created. This allows other AquaLogic Interaction portals to search this portal as the Guest user.

To allow other search relationships, you must create new incoming or outgoing federated searches. Whether your portal is requesting or serving content, you and the other administrators involved need to agree upon the following issues prior to establishing federated searches:

Configuring Federated Searches

AquaLogic Interaction portals can use federated search to search other AquaLogic Interaction portals. To enable this, you must configure a trust relationship between the searching (outgoing) and searched (incoming) portals. To establish the trust relationship, the two participating portals must agree upon a name and password combination that will be used to ensure that requests are coming from a trusted source. This information is recorded as the portal identification name and password.

There are outgoing and incoming federated searches:

Configuring Outgoing Federated Searches

To create a search Web service:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Web Service - Search.
  4. Define the search Web service as described in the online help.
  5. Click Finish.

To create an outgoing federated search:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Federated Search - Outgoing.
  4. When prompted, select the search Web service.
  5. Define your outgoing federated search as described in the online help.
  6. Click Finish.
Configuring Incoming Federated Searches

To create an incoming federated search:

  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Federated Search - Incoming.
  4. Define your incoming federated search as described in the online help.
  5. Click Finish.

Searching Non-Portal Repositories

If there is a non-portal repository that you want to search, BEA or another vendor might have written a search Web service to access it. If not, BEA provides an Enterprise Web Development Kit that allows you to easily write your own Search Web services in Java or .NET. For details, visit the BEA AquaLogic User Interaction Development Center ( http://dev2dev.bea.com/aluserinteraction/).

The following table describes how to create an outgoing federated search that accesses a non-portal repository.

Table 4-9 Creating an Outgoing Federated Search (Non-Portal Repository)
Task
Steps
Install the search provider.
For details, refer to documentation from the search provider.
Configure a remote server.
  1. Click Administration.
  2. Navigate to or create the administrative folder for this content source.
  3. In the Create Object drop-down list, select Remote Server.
  4. Configure connection information for the remote server as described in the online help.
  5. Click Finish.
Configure a search Web service.
  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Web Service - Search.
  4. Define the search Web service as described in the online help.
  5. Click Finish.
Create an outgoing federated search.
  1. Click Administration.
  2. Open an administrative folder.
  3. In the Create Object drop-down list, click Federated Search - Outgoing.
  4. When prompted, select the search Web service.
  5. Define your outgoing federated search as described in the online help.
  6. Click Finish.

