Package com.plumtree.remote.crawler

Provides classes and interfaces for crawling, indexing and representing documents from other systems in the WebCenter Interaction Knowledge Directory.


Interface Summary
IContainer An interface that allows the portal to crawl a remote document repository systematically by querying each container (node) for the documents and child containers (sub-nodes) it contains, as well as the users and groups that can access it.
IContainerProvider An interface that allows the portal to iterate over a backend directory structure.
ICrawlerLog An interface that returns logging information to the portal.
IDocument An interface that allows the portal to query information about and retrieve documents from backend repositories.
IDocumentProvider An interface that allows the portal to specify documents for retrieval from a backend repository.

Class Summary
ACLEntry JavaBean representing a security domain and user or group.
ChildContainer JavaBean representing the ChildContainer data type.
ChildDocument JavaBean representing the ChildDocument data type.
ChildRequestHint Enumeration for portal child request queries.
ContainerMetaData Stores metadata information about the Container object.
CrawlerConstants Static constants related to crawlers.
CrawlerInfo A NamedValueMap for storing information about Crawler settings.
CWSLogger A simple logging implementation for passing string messages back to the portal.
DocumentFormat Enumeration for the portal document format flag.
DocumentMetaData Stores metadata information about a Document object.
TypeNamespace Enumeration for the portal's Document Type Map namespaces.

Package com.plumtree.remote.crawler Description

Provides classes and interfaces for crawling, indexing and representing documents from other systems in the WebCenter Interaction Knowledge Directory.

Crawlers are extensible components used to index documents from a specific type of document repository, including Lotus Notes, Microsoft Exchange, Documentum and Novell. Crawlers only import links to documents; the documents themselves are left in their original locations. Crawlers access content from an external repository and index it in the portal. Portal users can search for and open crawled files through the portal Knowledge Directory. Crawlers can be used to provide access to files on protected backend systems without violating access restrictions.

In ALI version 5.x and above, crawlers are implemented as remote services that communicate with the portal using SOAP over HTTP. Using the IDK, you can create remote crawlers that access a wide range of backend systems. The purposes of a crawler are threefold:

  1. Iterate over and catalog a hierarchical data repository.
  2. Retrieve and index metadata about each document in the data repository and include it in the portal Knowledge Directory and search index. (The crawler is still required after indexing, because the portal uses it to retrieve the documents themselves.)
  3. Retrieve individual documents on demand through the portal Knowledge Directory, enforcing any user-level access restrictions.
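The iteration in step 1 follows a straightforward recursive pattern. The sketch below illustrates it with simplified stand-in types; these are not the IDK interfaces, and the real IContainerProvider and IContainer signatures differ.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: a simplified stand-in for the container hierarchy
// that IContainerProvider/IContainer expose. The real IDK API differs.
class CrawlSketch {

    interface Container {
        String getLocation();
        List<Container> getChildContainers();  // sub-nodes
        List<String> getChildDocuments();      // document locations in this node
    }

    // Depth-first traversal: catalog every document location in the
    // hierarchy, mirroring how the portal walks containers during a crawl.
    static List<String> crawl(Container node) {
        List<String> documents = new ArrayList<>(node.getChildDocuments());
        for (Container child : node.getChildContainers()) {
            documents.addAll(crawl(child));
        }
        return documents;
    }
}
```

In the actual service, the portal drives this traversal itself, calling back into the crawler for each container; the crawler only has to answer "what documents and child containers does this node hold?"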

To view the SOAP endpoints to enter in the Crawler Web Service Settings of the Crawler Web Service editor, open a browser and point it to http://<server>:<port>/<war_name>/services. The page should display "And now...Some Services." with a list of all defined services. The default endpoints are:
Container URL: http://<server>:<port>/<war_name>/services/ContainerProviderSoapBinding
Document URL: http://<server>:<port>/<war_name>/services/DocumentProviderSoapBinding
Service Configuration URL: http://<server>:<port>/<war_name>/services/SciProviderSoapBinding

If the DeployServlet was used to create the services, the service names will include a developer-supplied prefix.
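As an illustration, the default endpoint URLs above can be assembled from the server, port, and WAR name. The helper below is hypothetical, not part of the IDK; only the binding names come from the defaults listed above.

```java
// Builds the default crawler SOAP endpoint URLs for a given deployment.
// A DeployServlet deployment would add a developer-supplied prefix to the
// binding names, which this sketch does not model.
class EndpointUrls {
    static String endpoint(String server, int port, String warName, String binding) {
        return "http://" + server + ":" + port + "/" + warName + "/services/" + binding;
    }

    static String containerUrl(String server, int port, String warName) {
        return endpoint(server, port, warName, "ContainerProviderSoapBinding");
    }

    static String documentUrl(String server, int port, String warName) {
        return endpoint(server, port, warName, "DocumentProviderSoapBinding");
    }
}
```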

For additional information on the Oracle® WebCenter Interaction Development Kit, including tutorials, blogs, code samples and more, see the Oracle Technology Network.

Copyright ©2010 Oracle® Corporation. All Rights Reserved.