Interface Summary
IContainer | An interface that allows the portal to systematically crawl a remote document repository by querying all documents and child containers (sub-nodes) that a particular container (node) contains and the respective users and groups that can access that particular container (node). |
IContainerProvider | An interface that allows the portal to iterate over a backend directory structure. |
ICrawlerLog | An interface that returns logging information to the portal. |
IDocument | An interface that allows the portal to query information about and retrieve documents from backend repositories. |
IDocumentProvider | An interface that allows the portal to specify documents for retrieval from a backend repository. |
Class Summary
ACLEntry | JavaBean representing a security domain and user or group. |
ChildContainer | JavaBean representing a child container (sub-node) returned when the portal queries a container's contents.
ChildDocument | JavaBean representing a child document returned when the portal queries a container's contents.
ChildRequestHint | Enumeration for portal child request queries. |
ContainerMetaData | Stores metadata information about the Container object. |
CrawlerConstants | Static constants related to crawlers. |
CrawlerInfo | A |
CWSLogger | A simple logging implementation for passing string messages back to the portal. |
DocumentFormat | Enumeration for the portal document format flag. |
DocumentMetaData | Stores metadata information about a Document object. |
TypeNamespace | Enumeration for the portal's Document Type Map namespaces. |
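The interfaces above describe a recursive pattern: the portal asks each container for its documents and child containers, then repeats the query on each sub-node. The sketch below is illustrative only — these types are not the IDK interfaces listed above, but a simplified in-memory model of that crawl pattern; all names (`Node`, `CrawlSketch`, `crawl`) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a repository node: holds the documents it
// contains and its child containers (sub-nodes), as described for IContainer.
class Node {
    final String name;
    final List<Node> childContainers = new ArrayList<>();
    final List<String> documents = new ArrayList<>();
    Node(String name) { this.name = name; }
}

public class CrawlSketch {
    // Depth-first walk: collect this node's document links, then recurse
    // into each child container -- the shape of a crawl, since crawlers
    // import only links and leave documents in their original locations.
    static List<String> crawl(Node root) {
        List<String> links = new ArrayList<>(root.documents);
        for (Node child : root.childContainers) {
            links.addAll(crawl(child));
        }
        return links;
    }
}
```

A real crawler implements IContainerProvider and IDocumentProvider so the portal can drive this traversal remotely over SOAP.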
Provides classes and interfaces for crawling, indexing and representing documents from other systems in the WebCenter Interaction Knowledge Directory.
Crawlers are extensible components used to index documents from a specific type of document repository, including Lotus Notes, Microsoft Exchange, Documentum and Novell. Crawlers only import links to documents; the documents themselves are left in their original locations. Crawlers access content from an external repository and index it in the portal. Portal users can search for and open crawled files through the portal Knowledge Directory. Crawlers can be used to provide access to files on protected backend systems without violating access restrictions.
In ALI version 5.x and above, crawlers are implemented as remote services that communicate with the portal using SOAP (XML over HTTP). Using the IDK, you can create remote crawlers that access a wide range of backend systems.
To view the SOAP endpoints to enter in the Crawler Web Service Settings of the Crawler Web Service, open a browser and point it to http://<server>:<port>/<war_name>/services (for example, http://express-job1.devnet.plumtree.com:8080/HelloWorldCrawler/services). The page should display "And now...Some Services." with a list of all defined services.
The default endpoints are:
Container URL: http://<server>:<port>/<war_name>/services/ContainerProviderSoapBinding
Document URL: http://<server>:<port>/<war_name>/services/DocumentProviderSoapBinding
Service Configuration URL: http://<server>:<port>/<war_name>/services/SciProviderSoapBinding
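The default endpoints all share the same base path and differ only in the final binding name. As a minimal sketch, a hypothetical helper (not part of the IDK) could build them from the server, port, and WAR name:

```java
// Hypothetical helper: assembles the default SOAP endpoint URLs for a
// deployed crawler web service, following the patterns listed above.
public class CrawlerEndpoints {
    static String base(String server, int port, String warName) {
        return "http://" + server + ":" + port + "/" + warName + "/services/";
    }
    static String containerUrl(String server, int port, String warName) {
        return base(server, port, warName) + "ContainerProviderSoapBinding";
    }
    static String documentUrl(String server, int port, String warName) {
        return base(server, port, warName) + "DocumentProviderSoapBinding";
    }
    static String serviceConfigUrl(String server, int port, String warName) {
        return base(server, port, warName) + "SciProviderSoapBinding";
    }
}
```

For example, `containerUrl("express-job1.devnet.plumtree.com", 8080, "HelloWorldCrawler")` yields the Container URL shown on the example services page above.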
If the DeployServlet was used to create the services, the service names will appear with a developer-supplied prefix.
Copyright ©2010 Oracle® Corporation. All Rights Reserved.