|Oracle Ultra Search User's Guide
Part Number B10043-01
The Ultra Search administration tool lets you configure and schedule the Ultra Search crawler. This chapter contains the following topics:
The administration tool is a Web application for configuring and scheduling the Ultra Search crawler. The administration tool is typically installed on the same machine as your Web server. You can access the administration tool from any browser in your intranet, directly as an Ultra Search database user, or as a single sign-on (SSO) user with a SSO server.
The Ultra Search administration tool and the Ultra Search query applications are part of the Ultra Search middle tier components module. However, the Ultra Search administration tool is independent from the Ultra Search query application. Therefore, they can be hosted on different machines to enhance security or scalability.
With the administration tool, you can do the following:
Chapter 2, "Installing and Configuring Ultra Search" for information about how to deploy the administration tool
To configure the Ultra Search crawler, you must do the following:
Use query options to let users limit their searches. Searches can be limited to document attributes and data groups.
Search attributes can be mapped to HTML metatags, table columns, document attributes, and email headers. Some attributes, such as author and description, are predefined and need no configuration. However, you can customize your own attributes. To set custom document attributes to expose to the query user, use the Attributes Page.
Data source groups are logical entities exposed to the search engine user. When entering a query, the search engine user is asked to select one or more data groups to search from. A data group consists of one or more data sources. To define data groups, use the Queries Page.
Ultra Search provides context-sensitive online help, based on the language setting in the Users Page. If the translated help files are not installed on the local machine, then English online help files are used.
To download the latest online help files, visit the Oracle Technology Network (OTN). You must register online before using OTN; registration is free and can be done at
If you already have a user name and password for OTN, then you can go directly to the documentation section of the OTN Web site at
The following users can log on to the Ultra Search administration tool:
IAS_ADMIN users [applicable in iAS]
To log on to the administration tool, point your Web browser to one of the following URLs:
Immediately after installation, the only users able to create and manage instances are the following:
IAS_ADMIN Enterprise Manager user [applicable in iAS]
PORTAL SSO user belonging to the default company [applicable in iAS]
ORCLADMIN SSO user belonging to the default company [applicable in iAS]
After you are logged on as one of these special users, you can grant permission to other users, enabling them to create and manage Ultra Search instances. With the Ultra Search administration tool, you can only grant and revoke Ultra Search-related permissions to and from existing users. To add or delete users, use Oracle Internet Directory (OID) for single sign-on users or Oracle Enterprise Manager for local database users.
Single sign-on (SSO) is available only with the Oracle9i Application Server (9iAS) release. It is not available with the Oracle9i database release.
When single sign-on (SSO) users log in to the SSO-protected Ultra Search administration tool through the Oracle Portal administration page, one of the following occurs:
You might need to grant super-user privileges, or privileges for managing an Ultra Search instance, to an SSO user. This process is slightly different, depending on whether Oracle Portal is running in hosted mode or non-hosted mode, as described in the following section:
An SSO user is uniquely identified by Ultra Search with an SSO-nickname/subscriber-nickname combination.
At any point after installation, an Oracle Portal script could be run to alter the running mode from non-hosted to hosted. Whenever this is performed, the Oracle Portal script invokes an Ultra Search script to inform Ultra Search of the change from non-hosted to hosted modes.
Hosting Developer's Guide at
After successfully logging on to the Ultra Search administration tool, you find yourself on the Instances Page. This page manages all Ultra Search instances in the local database. In the top left corner of the page, there are tabs for creating, selecting, deleting, and editing instances.
Before you can use the administration tool to configure crawling and indexing, you must create an Ultra Search instance. An Ultra Search instance is identified with a name and has its own crawling schedules and index. Only users granted the WKADMIN role can create Ultra Search instances.
To create an instance, select the Create tab on the Instances Page. This takes you to another page with links for creating a regular instance (a master instance) and creating a read-only snapshot instance. Only Ultra Search super-users can create new instances.
If the search domains of Ultra Search instances overlap, then there could be crawling conflict for table data sources with logging enabled, email data sources, and some user-defined data sources.
To create an instance, do the following:
Every Ultra Search instance exists in one and only one database user/schema. To create a new Ultra Search instance, you first must have a database user that has been configured for Ultra Search and that does not already contain an Ultra Search instance.
The database user you create to house the Ultra Search instance should be assigned a dedicated self-contained tablespace. This is important if you plan to ever create snapshot instances of this instance. To do this, create a new tablespace. Then, create a new database user whose default tablespace is the one you just created.
From the main instance creation page, select the "Create instance" link, and provide the following information:
You can also enter the following optional index preferences:
Specify the name of the lexer you want to use for indexing. The default lexer is wk_lexer, as defined in the .sql file. After the instance is created, the lexer can no longer be changed.
Specify the name of a stoplist you want to use during indexing. The default stoplist is wk_stoplist, as defined in the .sql file. Avoid modifying the stoplist after the instance has been created.
Specify the name of the storage preference for the index of your instance. The default storage preference is wk_storage, as defined in the .sql file. After the instance is created, the storage preference cannot be changed.
A snapshot instance is a copy of another instance. Unlike a regular instance, a snapshot instance is read-only; it does not synchronize its index to the search domain. Also, after the master instance re-synchronizes to the search domain, the snapshot instance becomes out of date. At that point, you should delete the snapshot and create a new one.
A snapshot instance is useful for the following:
Two Ultra Search instances can answer queries about the same search domain. Therefore, in a set amount of time, two instances can answer more queries about that domain than one instance. Because snapshot instances do not involve crawling and indexing, snapshot instance creation is fast and inexpensive. Thus, snapshot instances can improve scalability.
If the master instance gets corrupted, its snapshot can be transformed into a regular instance by editing the instance mode to updatable. Because the snapshot and its master instance cannot reside on the same database, a snapshot instance should be made updatable only to replace a corrupted master instance.
A snapshot instance does not inherit authentication from the master instance. Therefore, if you make a snapshot instance updatable, you must reenter any authentication information needed to crawl the search domain.
To create a snapshot instance, do the following:
As with regular instances, snapshot instances require a database user that has been configured for Ultra Search and that does not already contain an Ultra Search instance.
This is done with the transportable tablespace mechanism, which does not allow renaming of tablespaces. Therefore, a snapshot instance cannot be created on the same database as its master.
Identify the tablespace or the set of tablespaces that contain all the master instance data. Then, copy it, and plug it into the database user from step 1.
From the main instance creation page, select the "Create read-only snapshot instance" link, and provide the following information:
After providing this information, click Apply.
You can have multiple Ultra Search instances. For example, an organization could have separate Ultra Search instances for its marketing, human resources, and development portals. The administration tool requires you to specify an instance before it lets you make any instance-specific changes.
To select an instance, do the following:
To delete an instance, do the following:
To edit an instance, click the Edit tab on the Instances Page. You can change the instance mode (make the instance updatable) or change the instance password.
You can change the instance mode to updatable or read-only. Updatable instances synchronize themselves to the search domain on a set schedule, whereas read-only instances (snapshot instances) do not do any synchronization. To set the instance mode, select the box corresponding to the mode you want, and click Apply.
An Ultra Search instance must know the password of the database user in which it resides. The instance cannot get this information directly from the database. During instance creation, Oracle provides the database user password, and the instance caches this information.
If this database user password changes, then the password that the instance has cached must be updated. To do this, enter the new password and click Apply. After the new password is verified against the database, it replaces the cached password.
The Ultra Search crawler is a Java application that spawns threads to crawl defined data sources, such as Web sites, database tables, or email archives. Crawling occurs at regularly scheduled intervals, as defined in the Schedules Page.
With this page, you can do the following:
Specify the number of crawler threads to be spawned at run time.
Specify the number of central processing units (CPUs) that exist on the server where the Ultra Search crawler will run. This setting determines the optimal number of document conversion threads used by the system. A document conversion thread converts multiformat documents into HTML documents for proper indexing.
Not all documents retrieved by the Ultra Search crawler specify the language. For documents with no language specification, the Ultra Search crawler attempts to automatically detect language. Click Yes to turn on this feature.
The language recognizer is trained statistically using trigram data from documents in various languages (Danish, Dutch, English, French, German, Italian, Portuguese, and Spanish). It starts with the hypothesis that the given document does not belong to any language and ultimately refutes this hypothesis for a particular language where possible. It operates on Latin-1 alphabet and any language with a deterministic Unicode range of characters (Chinese, Japanese, Korean, and so on.).
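The trigram statistics the recognizer is trained on are simply the overlapping three-character sequences of a document. The sketch below shows only that extraction step; it is an illustration of the input to such a recognizer, not the recognizer itself.

```java
import java.util.ArrayList;
import java.util.List;

public class Trigrams {
    // Extract the overlapping three-character sequences that
    // trigram-based language statistics are built from.
    static List<String> trigrams(String text) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i + 3 <= text.length(); i++) {
            result.add(text.substring(i, i + 3));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(trigrams("the quick"));
    }
}
```

A real recognizer compares the frequency distribution of these trigrams against per-language reference distributions before accepting or refuting a language hypothesis.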
The crawler determines the language code by checking the HTTP header content-language or, for a table data source, the LANGUAGE column. If it cannot determine the language, then it takes the following steps:
This language code is populated in the LANG column of the wk$doc table. The multi-lexer is the only lexer used for Ultra Search. All document URLs are stored in wk$doc for indexing and in wk$url for crawling.
If automatic language detection is disabled, or when a Web document does not have a specified language, the crawler assumes that the Web page is written in this default language. This setting is important, because language directly determines how a document is indexed.
This default language is used only if the crawler cannot determine the document language during crawling. Set language preference in the Users Page.
You can select a default language for the crawler or for data sources. Default language support for indexing and querying is available for the following languages:
A Web document could contain links to other Web documents, which could contain more links. This setting lets you specify the maximum number of nested links the crawler will follow.
Appendix A, "Tuning the Web Crawling Process" for more information on the importance of the crawling depth
Specify a crawler timeout, in seconds. The crawler timeout threshold is used to force a timeout when the crawler cannot access a Web page.
Specify the default character set. The crawler uses this setting when an HTML document does not have its character set specified.
Temporary Directory Location and Size
Specify a temporary directory and size. The crawler uses the temporary directory for intermediate storage during indexing. Specify the absolute path of the temporary directory. The size is the maximum temporary space, in megabytes, that the crawler will use.
The size of the temporary directory is important because it affects index fragmentation. The smaller the size, the more fragmented the index. As a result, the query will be slower, and index optimization needs to be performed more frequently. Increasing the directory size reduces index fragmentation, but it also reduces crawling throughput (total number of documents crawled each hour). This is because it takes longer to index a bigger temporary directory, and the crawler needs to wait for the indexing to complete before it can continue writing new documents to the directory.
Specify the following:
The log file directory stores the crawler log files. The log file records all crawler activity, warnings, and error messages for a particular schedule. It includes messages logged at startup, runtime, and shutdown. Logging everything can create very large log files when crawling a large number of documents. However, in certain situations, it can be beneficial to configure the crawler to print detailed activity to each schedule log file. The crawler logfile language is the language the crawler uses to generate the log file.
The database connect string is a standard JDBC connect string used by the remote crawler when it connects to the database. The connect string can be provided in the form [hostname]:[port]:[sid] or in TNS keyword-value syntax.
In a Real Application Clusters environment, the TNS keyword-value syntax should be used, because it allows connection to any node of the system. For example,
"(DESCRIPTION=(LOAD_BALANCE=yes)(ADDRESS=(PROTOCOL=TCP)(HOST=cls02a)(PORT=3001))(ADDRESS=(PROTOCOL=TCP)(HOST=cls02b)(PORT=3001))(CONNECT_DATA=(SERVICE_NAME=sales.us.acme.com)))"
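Both connect-string forms can be assembled programmatically. The sketch below builds each form from its parts; the host, port, SID, and service names are placeholders, not values from any real installation.

```java
public class ConnectString {
    // Build the simple [hostname]:[port]:[sid] form of the connect string.
    static String hostPortSid(String host, int port, String sid) {
        return host + ":" + port + ":" + sid;
    }

    // Build a minimal TNS keyword-value description for a single node.
    // A Real Application Clusters string would list multiple ADDRESS
    // entries and LOAD_BALANCE=yes, as in the example above.
    static String tnsDescription(String host, int port, String serviceName) {
        return "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=" + host
                + ")(PORT=" + port + "))"
                + "(CONNECT_DATA=(SERVICE_NAME=" + serviceName + ")))";
    }

    public static void main(String[] args) {
        System.out.println(hostPortSid("dbhost", 1521, "orcl"));
        System.out.println(tnsDescription("dbhost", 1521, "sales.us.acme.com"));
    }
}
```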
Use this page to view and edit remote crawler profiles. A remote crawler profile consists of all parameters needed to run the Ultra Search crawler on a remote machine other than the Ultra Search database. A remote crawler profile is identified by the hostname. The profile includes the cache, log, and mail directories that the remote crawler shares with the database machine.
To set these parameters, click Edit. Enter the shared directory paths as seen by the remote crawler. You must ensure that these directories are shared or mounted appropriately.
Use this page to view the following crawler statistics:
This provides a general summary of crawler activity:
This includes the following:
This displays crawler progress for the past week. It shows the total number of documents that have been indexed for exactly one week prior to the current time. The Time column rounds the current time to the nearest hour.
This lists errors encountered during the crawling process. It also lists the number of URLs that cause each error.
Use this page to set up basic authentication and proxies.
The Ultra Search crawler provides basic authentication information to hosts that require it. Basic authentication is based on the model that the client must authenticate itself with a username and a password for each realm. A realm is a string that identifies a set of protected URLs on a Web server. Enter the host, realm, username, and password, and click Add.
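Under basic authentication, a client identifies itself by sending the Base64 encoding of username:password in the Authorization header. The sketch below shows how that header value is formed, using the well-known example credentials from RFC 2617; it illustrates the mechanism, not Ultra Search's internal code.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth {
    // Compute the Authorization header value for HTTP basic
    // authentication: "Basic " + base64(username + ":" + password).
    static String basicAuthHeader(String username, String password) {
        String credentials = username + ":" + password;
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }

    public static void main(String[] args) {
        // Example credentials from RFC 2617.
        System.out.println(basicAuthHeader("Aladdin", "open sesame"));
    }
}
```

Because the password is only Base64-encoded, not encrypted, basic authentication protects credentials only as well as the underlying transport does.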
Specify a proxy server if the search space includes Web pages that reside outside your organization's firewall. Specifying a proxy server is optional. Currently, only the HTTP protocol is supported.
You can also set domain exceptions.
When your indexed documents contain metadata, such as author and date information, you can let users refine their searches based on this information. For example, users can search for all documents where the author attribute has a certain value.
The list of values (LOV) for a document attribute can help users specify a search query. An attribute value can have a display name. For example, the attribute country might use the country code as the attribute value but show the country name to the user. There can be multiple translations of an attribute display name.
To define a search attribute, use the Search Attributes subtab. Ultra Search provides some system-defined attributes, such as "Author" and "Description." You can also define your own.
After defining search attributes, you must map between document attributes and global search attributes for data sources. To do so, use the Mappings subtab.
Ultra Search provides a command-line tool to load metadata, such as search attribute LOVs and display names into an Ultra Search database. If you have a large amount of data, this is probably faster than using the HTML-based administration tool. For more information, see Appendix E, "Loading Metadata into Ultra Search".
Search attributes are attributes exposed to the query user. Ultra Search provides system-defined attributes, such as "Author" and "Description." Ultra Search maintains a global list of search attributes. You can add, edit, or delete search attributes. You can also click Manage LOV to change the list of values (LOV) for the search attribute. There are two categories of attribute LOVs: one is global across all data sources, the other is data source-specific.
To define your own attribute, enter the name of the attribute in the text box; select string, date, or number; and click Add.
You can add or delete LOV entries and display names for search attributes. The display name is optional; if it is absent, then the LOV entry itself is shown in the query screen.
LOV entries are always stored as strings. If an LOV entry represents a date, then you must enter it in "DD-MM-YYYY" format.
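The LOV display-name lookup described above amounts to a map from stored values to user-visible names, falling back to the raw entry when no display name exists. The country-code pairs below are made-up sample data for illustration.

```java
import java.util.Map;

public class LovDisplay {
    // Sample LOV entries (country codes) mapped to display names.
    // These pairs are illustrative, not shipped data.
    static final Map<String, String> DISPLAY = Map.of(
            "us", "United States",
            "fr", "France",
            "jp", "Japan");

    // Fall back to the raw LOV entry when no display name is defined,
    // mirroring the behavior described for the query screen.
    static String displayName(String lovEntry) {
        return DISPLAY.getOrDefault(lovEntry, lovEntry);
    }

    public static void main(String[] args) {
        System.out.println(displayName("fr"));
        System.out.println(displayName("de"));
    }
}
```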
To update an attribute's LOV, click the Manage LOV icon for that attribute.
A data source-specific LOV can be updated in three ways:
This section displays mapping information for user-defined sources. Mapping is done at the agent level, and document attributes are automatically mapped to search attributes with the same name initially. Document attributes and search attributes are mapped one-to-one. For each user-defined data source, you can edit which global search attribute the document attribute is mapped to.
For Web or table data sources, mappings are created manually when you create the data source. For user-defined data sources, mappings are automatically created on subsequent crawls.
Click Edit mappings to change this mapping.
Editing the existing mapping is costly, because the crawler must recrawl all documents for this data source. You should avoid this step, unless necessary.
There are no user-managed mappings for email sources; email mappings are predefined. The "From" field of an email is intrinsically mapped to the Ultra Search "Author" attribute. Likewise, the "Subject" field of an email is mapped to the Ultra Search "Subject" attribute, and the abstract of the email message is mapped to the "Description" attribute.
A collection of documents is called a source. The data source is characterized by the properties of its location, such as a Web site or an email inbox. The Ultra Search crawler retrieves data from one or more data sources.
The different types of sources are:
You can create as many data sources as you want. The following section explains how to create and edit data sources.
A Web source represents HTML content on a specific Web site. Web sources differ from other data source types because they exist specifically to facilitate maintenance crawling of specific Web sites.
To create a new Web source, do the following:
Robots exclusion lets you control which parts of your sites can be visited by robots. If robots exclusion is enabled (the default), then the Web crawler traverses pages based on the access policy specified in the Web server's robots.txt file. For example, when a robot visits http://www.foobar.com/, it checks for http://www.foobar.com/robots.txt. If that file exists, the crawler analyzes its contents to see whether it is allowed to retrieve the document. If you own the Web sites, then you can disable robots exclusion. However, when crawling other Web sites, you should always comply with robots.txt by enabling robots exclusion.
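The exclusion check a compliant crawler performs can be sketched as a prefix test against the Disallow entries parsed from robots.txt. This is a deliberately simplified illustration: real robots.txt handling also involves User-agent sections and other directives.

```java
import java.util.List;

public class RobotsCheck {
    // Decide whether a crawler may fetch a path, given the Disallow
    // prefixes parsed from a site's robots.txt.
    static boolean allowed(String path, List<String> disallowPrefixes) {
        for (String prefix : disallowPrefixes) {
            if (!prefix.isEmpty() && path.startsWith(prefix)) {
                return false; // path falls under a Disallow rule
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical rules, as if read from robots.txt.
        List<String> disallow = List.of("/private/", "/tmp/");
        System.out.println(allowed("/index.html", disallow));
        System.out.println(allowed("/private/report.html", disallow));
    }
}
```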
The URL rewriter is a user-supplied Java module implementing the Ultra Search UrlRewriter interface. The crawler uses it to filter or rewrite extracted URL links before they are put into the URL queue. URL filtering removes unwanted links, and URL rewriting transforms the URL link. This transformation is necessary when access URLs are used.
The UrlRewriter provides the following possible outcomes for links:
The generated new "url link" is subject to all existing host, path, and mimetype inclusion and exclusion rules.
You must put the implemented rewriter class in a jar file and provide the class name and jar file name here.
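The three outcomes above (discard, keep, replace) can be sketched as a single rewrite function. The class below is a standalone illustration only: it does not implement the actual Ultra Search UrlRewriter interface, and the filtering and host-mapping rules, like the example.com host names, are made up.

```java
public class RewriterSketch {
    // Return null to discard a link, the input unchanged to keep it,
    // or a new string to replace it.
    static String rewrite(String url) {
        // Filter: drop links the crawler should not follow
        // (illustrative rule).
        if (url.startsWith("mailto:")) {
            return null;
        }
        // Rewrite: map a hypothetical internal access URL
        // to its public display form.
        String accessPrefix = "http://intranet.example.com/";
        if (url.startsWith(accessPrefix)) {
            return "http://www.example.com/"
                    + url.substring(accessPrefix.length());
        }
        // Keep the link as-is.
        return url;
    }

    public static void main(String[] args) {
        System.out.println(rewrite("http://intranet.example.com/docs/a.html"));
        System.out.println(rewrite("mailto:someone@example.com"));
    }
}
```

As the text notes, any rewritten link is still subject to the existing host, path, and mimetype inclusion and exclusion rules.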
A table source represents content in a database table or view. The database table or view can reside in the Ultra Search database instance or in a remote database. Ultra Search accesses remote databases using database links.
To create a table source, click Create new table source, and follow these steps:
The Table Column to Key Mappings section provides mapping information. Ultra Search supports table keys of DATE type. If key1 is of DATE type, then you must specify the format model used by the Web site so that Oracle knows how to interpret the string. For example, the date format model for the string '11-Nov-1999' is 'DD-Mon-YYYY'. You can also map other table columns to Ultra Search attributes. Do not map the text column.
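To see what the format model buys you, the Oracle model 'DD-Mon-YYYY' from the example above corresponds to the Java pattern "dd-MMM-yyyy"; either one tells the parser how to interpret the string '11-Nov-1999'. This sketch demonstrates the correspondence, not Ultra Search's own parsing code.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class KeyDateFormat {
    public static void main(String[] args) {
        // The Oracle format model 'DD-Mon-YYYY' corresponds to the
        // Java pattern "dd-MMM-yyyy" for the same sample string.
        DateTimeFormatter fmt =
                DateTimeFormatter.ofPattern("dd-MMM-yyyy", Locale.ENGLISH);
        LocalDate key = LocalDate.parse("11-Nov-1999", fmt);
        System.out.println(key); // 1999-11-11
    }
}
```

Without the model, a string key like '11-Nov-1999' is ambiguous; with it, the day, month, and year fields are unambiguous.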
Oracle9i SQL Reference for more on format models
Click Edit to change the name of the table source; change, add, or delete table column and search attribute mappings; change the display URL template or column; and view values of the table source settings.
If a table source has more than one table, then a view joining the relevant tables must be created. Ultra Search then uses this view as the table source. For example, two tables with a master-detail relationship can be joined through a select statement on the master table and a user-implemented PL/SQL function that concatenates the detail table rows.
The following restrictions apply to base tables or views on a remote database that are accessed over a database link by the crawler.
If the table contains a column of type CLOB, then the table must have a ROWID column. A table or view might not have a ROWID column for various reasons, including the following:
The best way to know whether a remote table or view can be safely crawled by Ultra Search is to check for the existence of the ROWID column. To do so, run a SQL statement that selects the ROWID pseudocolumn from that table or view using SQL*Plus.
An email source derives its content from emails sent to a specific email address. When the Ultra Search crawler searches an email source, it collects all emails that have the specific email address in any of the "To:" or "Cc:" email header fields.
The most popular application of an email source is where an email source represents all emails sent to a mailing list. In such a scenario, multiple email sources are defined where each email source represents an email list.
To crawl email sources, you need an IMAP account. At present, the Ultra Search crawler can only crawl one IMAP account. Therefore, all emails to be crawled must be found in the inbox of that IMAP account. For example, in the case of mailing lists, the IMAP account should be subscribed to all desired mailing lists. All new postings to the mailing lists are sent to the IMAP email account and subsequently crawled. The Ultra Search crawler is IMAP4 compliant.
When the Ultra Search crawler retrieves an email message, it deletes the email message from the IMAP server. Then, it converts the email message content to HTML and temporarily stores that HTML in the cache directory for indexing. Next, the Ultra Search crawler stores all retrieved messages in a directory known as the archive directory. The email files stored in this directory are displayed to the search end-user when referenced by a query hit.
To crawl email sources, you must specify the username and password of the email account on the IMAP server. Also specify the IMAP server hostname and the archive directory.
To create an email source, you must enter an email address and a description. The description can be viewed by all search end-users, so specify a short but meaningful name. When you create (register) an email source, the name you use is the email address of the mailing list. If emails are not sent to one of the registered mailing lists, then those emails are not crawled.
You can specify email address aliases for an email source. Specifying an alias for an email source causes all emails sent to the main email address, as well as the alias address, to be gathered by the crawler.
A file source is the set of documents that can be accessed through the file protocol on the Ultra Search database machine or on a remote crawler machine.
To edit the name of a file source, click Edit.
To create a new file source, do the following:
Ultra Search displays file data sources in text format by default. However, if you specify a display URL for the file data source, then Ultra Search uses that URL to display the file data source.
With a display URL, the file data source is accessed through network protocols, such as HTTP or HTTPS. To generate a display URL for the file data source, specify the prefix of the original file URL and the prefix of the display URL. Ultra Search replaces the prefix of the file URL with the prefix of the display URL.
For example, if your file URL is file:///home/archive/<sub_dir_name>/<file_name> and the display URL is https://host:7777/private/<sub_dir_name>/<file_name>, then specify file:///home/archive as the file URL prefix and https://host:7777/private as the display URL prefix.
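The prefix substitution described above can be sketched as a simple string replacement. The archive path and host below are the placeholders from the example, not real locations.

```java
public class DisplayUrl {
    // Replace the configured file URL prefix with the display URL
    // prefix, as described for file data sources.
    static String toDisplayUrl(String fileUrl, String filePrefix,
                               String displayPrefix) {
        if (fileUrl.startsWith(filePrefix)) {
            return displayPrefix + fileUrl.substring(filePrefix.length());
        }
        return fileUrl; // no mapping configured for this prefix
    }

    public static void main(String[] args) {
        System.out.println(toDisplayUrl(
                "file:///home/archive/reports/q1.html",
                "file:///home/archive",
                "https://host:7777/private"));
    }
}
```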
Ultra Search lets you define, edit, or delete your own data sources and types in addition to the ones provided. You might implement your own crawler agent to crawl and index a proprietary document repository or management system, such as Lotus Notes or Documentum, which contain their own databases and interfaces.
For each new data source type, you must implement a crawler agent as a Java class. The agent collects document URLs and associated metadata from the proprietary document source and returns the information to the Ultra Search crawler, which enqueues it for later crawling.
To define a new data source, you first define a data source type to represent it. You define the type name, the crawler agent Java class/jar file, and the parameters to be used, such as the starting address. After you define the data source type, define a new data source by specifying parameter values.
To create a new user-defined data source, click Create new source. To create, edit, or delete data source types, click Manage types.
To create a user-defined data source type:
Edit data source type information by changing the data source type name, description, crawler agent Java class/jar file name, or parameters.
To create a user-defined data source:
Edit user-defined data sources by changing the name, type, default language, or starting address.
Ultra Search supports the crawling and indexing of Oracle9i Application Server (9iAS) Portal installations. This enables searching across multiple portal installations. To crawl a 9iAS Portal, you must first register your portal with Ultra Search. To register your portal:
After registering your portal, select the Oracle 9iAS Portal page groups you want to index. Each page group chosen is created as a 9iAS portal source.
You can edit the types of documents the Ultra Search crawler should process for a portal source. HTML and plain text are default document types that the crawler will always process. Edit document types by clicking the edit icon of the portal source after it has been created.
Use this page to schedule data synchronization and index optimization. Data synchronization means keeping the Ultra Search index up to date with all data sources. Index optimization means keeping the updated index optimized for best query performance.
The tables on this page display information about synchronization schedules. A synchronization schedule has one or more data sources assigned to it. The synchronization schedule frequency specifies when the assigned data sources should be synchronized. Schedules are sorted first by name. Within a synchronization schedule, individual data sources are listed and can be sorted by source name or source type.
To create a new schedule, click Create New Schedule and follow these steps:
Update the indexing option in the Update Schedule page. If you decide to examine URLs before indexing for the schedule, then after you run the schedule, the schedule status is shown as "Indexing pending."
In data harvesting mode, you should begin crawling first. After crawling is done, click Examine URL to examine document URLs and status, remove unwanted documents, and start indexing. After you click Begin index, the schedule status changes through launching, executing, scheduled, and so on.
After you click the link for a specific host, you see a list of document URLs that have been crawled for that host. You can delete document URLs in this section.
After a synchronization schedule has been defined, you can do the following in the Synchronization Schedules List:
The crawler behaves differently for the documents collected.
Crawling mode and recrawl policy can be combined into six different combinations. For example, "process all documents" with "index only" forces reindexing of existing documents in the data source, while "process documents that have changed" with "index only" reindexes only changed documents.
You can launch a synchronization schedule in the following ways:
Launching a synchronization schedule can take a very long time. If a schedule has been launched before, then the next time it is launched, all URLs that belong to the data sources to be crawled by the schedule are copied into a queue table. Depending on the number of URLs associated with those data sources, the copy operation can take a long time. The administration tool displays the schedule state as 'Launching' during the entire time.
Click the link in the status column to see the synchronization schedule status. To see the crawling progress for any data source associated with this schedule, click the statistics icon.
The crawling progress contains the following information:
It also contains the following statistics:
To ensure fast query results, the Ultra Search crawler maintains an active index of all documents crawled over all data sources. This page lets you schedule when you would like the index to be optimized. The index should be optimized during hours of low usage.
You can specify the index optimization schedule frequency. Be sure to specify all required data for the option that you select. You can optimize the index immediately, or you can enable the schedule.
Specify a maximum duration for the index optimization process. The actual time taken for optimization will not exceed this limit, but it could be shorter. Specifying a longer optimization time will result in a more optimized index. Alternatively, you can specify that the optimization continue until it is finished.
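A time-bounded optimization pass of the kind described above can be sketched as a loop that stops at a deadline or when the work is finished, whichever comes first. This is a generic illustration under assumed names, not the actual optimization procedure.

```python
import time

# Hypothetical sketch of a duration-limited optimization pass: work proceeds
# in small steps and stops at the deadline or when finished, whichever is
# first, so the actual time taken never exceeds the limit.
def optimize(steps, max_duration_s):
    deadline = time.monotonic() + max_duration_s
    done = 0
    for step in steps:
        if time.monotonic() >= deadline:   # duration limit reached; stop early
            break
        step()
        done += 1
    return done                            # may be less than len(steps)

completed = optimize([lambda: None] * 5, max_duration_s=1.0)
assert completed == 5   # trivial steps finish well before the limit
```

A longer limit gives the loop more steps to run, which matches the note that a longer optimization time yields a more optimized index.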
This section lets you specify query-related settings, such as data source groups, URL submission, relevancy boosting, and query statistics.
Data source groups are logical entities exposed to the search engine user. When entering a query, the user is asked to select one or more data groups to search from. Use this page to define these data groups.
A data group consists of one or more data sources. A data source can be assigned to multiple data groups. Data groups are sorted by name. Within each data group, individual data sources are listed and can be sorted by source name or source type.
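The relationship described above is many-to-many, and can be pictured with a minimal sketch; the group and source names below are invented for illustration.

```python
# Minimal sketch of the data model described above: a data group holds one or
# more data sources, and the same source may appear in several groups.
# Group and source names are hypothetical examples.
groups = {
    "Intranet": [("hr_pages", "Web"), ("policies", "File")],
    "HR":       [("hr_pages", "Web")],   # hr_pages belongs to two groups
}

# Groups sort by name; within a group, sources can sort by name or by type.
for name in sorted(groups):
    by_name = sorted(groups[name], key=lambda s: s[0])
    by_type = sorted(groups[name], key=lambda s: s[1])

assert sorted(groups) == ["HR", "Intranet"]
```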
To create a new data source group, do the following:
URL submission lets query users submit URLs. These URLs are added to the seed URL list and included in the Ultra Search crawler search space. You can allow or disallow query users to submit URLs.
URLs are submitted to a specific Web data source. URL boundary rules checking ensures that submitted URLs comply with the URL boundary rules of the web data source. You can allow or disallow URL boundary rules checking.
Relevancy boosting lets administrators influence the order in which documents are ranked in the query result list. You can use it to promote important documents to higher scores, making them easier to find.
There are two methods for locating URLs for relevancy boosting: locate by search or manual URL entry.
To boost a URL, first locate a URL by performing a search. You can specify a hostname to narrow the search. After you have located the URL, click Information to edit the query string and score for the document.
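Conceptually, a boost assigns an administrator-chosen score to a document for a given query string, and that score overrides the engine's computed one when results are ranked. The sketch below illustrates the idea only; the term, URLs, and scores are hypothetical, and this is not how Ultra Search computes scores internally.

```python
# Illustrative sketch of relevancy boosting: an administrator-assigned score
# for a (query term, URL) pair replaces the engine's computed score, pushing
# that document higher in the result list. All values are hypothetical.
boosts = {("benefits", "http://example.com/hr/benefits"): 95}

def ranked(term, results):
    # results: list of (url, engine_score); the boosted score wins when present
    rescored = [(url, boosts.get((term, url), score)) for url, score in results]
    return sorted(rescored, key=lambda r: r[1], reverse=True)

hits = [("http://example.com/hr/benefits", 12), ("http://example.com/news", 40)]
assert ranked("benefits", hits)[0][0] == "http://example.com/hr/benefits"
```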
If a document has not been crawled or indexed, then it cannot be found in a search. However, you can provide a URL and enter the relevancy boosting information with it. To do so, click Create, and enter the following:
The document is searchable for the term as soon as the boosting information is loaded. The document is also indexed the next time the schedule runs.
With manual URL entry, you can assign URLs only for Web data sources. If no Web data source is defined, users see an error message on this page.
Ultra Search provides a command-line tool to load metadata, such as document relevance boosting, into an Ultra Search database. If you have a large amount of data, this is probably faster than using the HTML-based administration tool. For more information, see Appendix E, "Loading Metadata into Ultra Search".
This section lets you enable or disable the collection of query statistics. The logging of query statistics reduces query performance. Therefore, Oracle recommends that you disable the collection of query statistics during regular operation.
After you enable query statistics, the table that stores statistics data is truncated every Sunday at 1:00 A.M.
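The weekly truncation time can be computed from any timestamp; the following sketch mirrors the "every Sunday at 1:00 A.M." schedule described above (the scheduling function itself is hypothetical, not part of Ultra Search).

```python
import datetime

# Sketch: compute the next weekly truncation time (Sunday 1:00 A.M.) from an
# arbitrary timestamp, mirroring the schedule described above.
def next_truncation(now):
    days_ahead = (6 - now.weekday()) % 7          # Monday=0 ... Sunday=6
    candidate = (now + datetime.timedelta(days=days_ahead)).replace(
        hour=1, minute=0, second=0, microsecond=0)
    if candidate <= now:                          # already past this Sunday 1:00
        candidate += datetime.timedelta(days=7)
    return candidate

t = next_truncation(datetime.datetime(2002, 7, 3, 12, 0))  # a Wednesday
assert t.weekday() == 6 and t.hour == 1
```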
If query statistics collection is enabled, you can click one of the following categories:
This summarizes all query activity on a daily basis. The statistics gathered are:
This summarizes the 50 most frequent queries that occurred in the past 24 hours. Each row in the table describes statistics for a particular query string.
This summarizes the top 50 queries that failed over the past 24 hours. A failed query is one where the search engine end-user did not locate any query results.
The columns are:
Use this page to manage Ultra Search administrative users. You can assign a user to manage an Ultra Search instance. You can also select a language preference.
This section lets you set preference options for the Ultra Search administrator.
You can specify the date and time format. The pull-down menu lists the following languages:
You can also select the number of rows to display on each page.
A user with super-user privileges can perform all administrative functions on all instances, including creating instances, dropping instances, and granting privileges. Only super-users can access this page.
To grant super-user administrative privileges to another user, specify the user name and type. Specify also whether the user should be allowed to grant super-user privileges to other users. Then click Add.
Only instance owners, users that have been granted general administrative privileges on this instance, or super-users are allowed to access this page. Instance owners must have been granted the
Granting general administrative privileges to a user allows that user to modify general settings for this instance. To do this, specify the user name and type. Specify also whether the user should be allowed to grant administrative privileges to other users. Then click Add.
To remove one or more users from the list of administrators for this instance, select one or more usernames from the list of current administrators and click Remove.
General administrative privileges do not include the ability to:
These privileges belong to super-users.
Ultra Search lets you translate names to different languages. This page lets you enter multiple values for search attributes, list of values (LOV) display names, and data groups.
This section lets you translate attribute display names to different languages. The pull-down menu lists the following languages:
This section lets you translate LOV display names to different languages. Select a search attribute from the pull-down menu: author, description, mimetype, subject, or title. Select the LOV type, and then select the language from the pull-down menu. The pull-down menu lists the language options.
This section lets you translate data group display names to different languages. The pull-down menu lists the language options.