| Interface Summary | |
| IContainer | An interface that allows the portal to systematically crawl a remote document repository by querying all documents and child containers (sub-nodes) that a particular container (node) contains and the respective users and groups that can access that particular container (node). |
| IContainerProvider | An interface that allows the portal to iterate over a backend directory structure. |
| ICrawlerLog | An interface that returns logging information to the portal. An instance of this interface will be passed into the service's initialize calls. |
| IDocument | An interface that allows the portal to query information about and retrieve documents from backend repositories. |
| IDocumentProvider | An interface that allows the portal to specify documents for retrieval from a backend repository. |
| Class Summary | |
| ACLEntry | JavaBean representing a security domain and user or group. |
| ChildContainer | JavaBean representing a child container (sub-node) within a crawled container. |
| ChildDocument | JavaBean representing a child document within a crawled container. |
| ChildRequestHint | Enumeration for portal child request queries. |
| ContainerMetaData | Stores metadata information about the Container object. |
| CrawlerConstants | Static constants related to crawlers. |
| CrawlerInfo | A JavaBean containing information about the crawler. |
| CWSLogger | A simple logging implementation for passing string messages back to the portal. |
| DocumentFormat | Enumeration for the portal document format flag. |
| DocumentMetaData | Stores metadata information about a Document object. |
| TypeNamespace | Enumeration for portal's Document Type Map namespaces. |
Provides classes and interfaces for crawling, indexing, and representing documents from other systems within the Knowledge Directory of the Plumtree Corporate Portal.
Crawlers are extensible components used to index documents from a specific type of document repository, such as Lotus Notes, Microsoft Exchange, Documentum, or Novell. Crawlers import only links to documents; the documents themselves are left in their original locations. Crawlers access content from an external repository and index it in the portal, and portal users can then search for and open crawled files through the portal Knowledge Directory. Crawlers can also be used to provide access to files on protected back-end systems without violating access restrictions.
In version 5.0, crawlers are implemented as remote services that communicate with the portal using XML over SOAP and HTTP. Using the EDK, you can create remote crawlers that access a wide range of back-end systems. The purposes of a crawler are threefold: to iterate over the back-end directory structure, to query the documents and child containers (sub-nodes) that each container holds and the users and groups that can access them, and to retrieve individual documents from the repository for indexing.
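The sketch below illustrates that crawl flow against a plain file system. It is a minimal illustration only: the class and method names (FileSystemCrawlSketch, attachToContainer, queryContainer) are assumptions chosen for clarity, not the EDK API. A real crawler implements the IContainerProvider, IContainer, IDocumentProvider, and IDocument interfaces instead.

import java.io.File;

// Minimal sketch of the crawl flow described above. The class and method
// names are illustrative assumptions, not the EDK API: a real crawler
// implements IContainerProvider/IContainer to walk the repository and
// IDocumentProvider/IDocument to retrieve file contents.
public class FileSystemCrawlSketch {

    // Analogous to IContainerProvider: attach to one node (here, a folder)
    // in the back-end directory structure.
    static File attachToContainer(String location) {
        File dir = new File(location);
        if (!dir.isDirectory()) {
            throw new IllegalArgumentException(location + " is not a container");
        }
        return dir;
    }

    // Analogous to IContainer: report the child containers (sub-nodes) and
    // child documents that this container holds, recursing into sub-nodes
    // the way the portal does during a crawl.
    static void queryContainer(File container) {
        File[] children = container.listFiles();
        if (children == null) {
            return; // unreadable directory; a real crawler would log this
        }
        for (File child : children) {
            if (child.isDirectory()) {
                System.out.println("child container: " + child.getPath());
                queryContainer(child);
            } else {
                System.out.println("child document:  " + child.getPath());
            }
        }
    }

    public static void main(String[] args) {
        // The portal drives this traversal itself; main() only simulates it.
        queryContainer(attachToContainer(args.length > 0 ? args[0] : "."));
    }
}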
To view the SOAP endpoints to enter in the Crawler Web Service Settings of the Crawler Web Service, open a browser and point it to http://<my_server>:<my_port>/<war_name>/services. For example, http://express-job1.devnet.plumtree.com:8080/HelloWorldCrawler/services. The page should display "And now...Some Services." followed by a list of all defined services. The default endpoints are:

Container URL: http://<my_server>:<my_port>/<war_name>/services/ContainerProviderSoapBinding
Document URL: http://<my_server>:<my_port>/<war_name>/services/DocumentProviderSoapBinding
Service Configuration URL: http://<my_server>:<my_port>/<war_name>/services/SciProviderSoapBinding

If the DeployServlet was used to create the services, the services will appear with a developer-supplied prefix.
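As a quick deployment check, the snippet below fetches that services page over HTTP and prints it, so you can confirm the SoapBinding endpoints are listed before configuring the Crawler Web Service. It is a minimal sketch: the host, port, and WAR name in the URL are placeholders to replace with your own values.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Fetches the services page to verify the crawler web service is deployed.
public class ServiceListCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: substitute your own server, port, and WAR name.
        URL url = new URL("http://my_server:8080/HelloWorldCrawler/services");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("HTTP status: " + conn.getResponseCode());
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            // The page should list each SoapBinding endpoint named above.
            System.out.println(line);
        }
        in.close();
    }
}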
Copyright ©2007 BEA Systems, Inc. All Rights Reserved.