Oracle® Secure Enterprise Search Administrator's Guide
11g Release 2 (11.2.2)
Part Number E23427-01
Your Web crawling strategy can be as simple as identifying a few well-known sites that are likely to contain links to most of the other intranet sites in your organization. You could test this by crawling these sites without indexing them. After the initial crawl, you have a good idea of the hosts that exist in your intranet. You could then define separate Web sources to facilitate crawling and indexing on individual sites.
However, the process of discovering and crawling your organization's intranet, or the Internet, is generally an iterative one, characterized by periodic analysis of crawling results and modification of crawling parameters. For example, if you observe that the crawler is spending days crawling one Web host, then you might want to exclude crawling at that host or limit the crawling depth.
This section describes the most common considerations for improving crawl performance:
See Also: "Monitoring the Crawling Process" for more information on crawling parameters
The Failed Schedules section on the Home - General page lists all schedules that have failed. A failed schedule is one in which the crawler encountered an irrecoverable error, such as an indexing error or a source-specific login error, and cannot proceed. A failed schedule can result in only a partial collection and indexing of documents.
The smallest granularity of the schedule interval is one hour. For example, you cannot start a schedule at 1:30 am.
If a crawl takes longer to finish than the scheduled interval, then it starts again when the current crawl is done. There is no option to have the scheduled time automatically pushed back to the next scheduled time.
When multiple sources are assigned to one schedule, the sources are crawled one by one following the order of their assignment in the schedule.
The schedule starts crawling the assigned sources in the assigned order. Only one source is crawling under a schedule at any given time. If a source crawl fails, then the rest of the sources assigned after it are not crawled. The schedule does not restart. You must either resolve the cause of the failure and resume the schedule, or remove the failed source from the schedule.
There is no automatic e-mail notification of schedule success or failure.
For more information about documents that the crawler does not index:
Browse the crawler log in the Oracle SES Administration GUI. Select the Schedules subtab from the Home page, then click the Log File icon for the schedule.
In the Oracle SES Administration GUI, select the Statistics subtab from the Home page. Under Crawler Statistics, choose Problematic URLs. This page lists errors encountered during the crawling process and the number of URLs that caused each error.
In Oracle Fusion Applications, the schedule is executed by Oracle Enterprise Scheduling System (ESS) using the time zone of the middle tier. The schedule is not affected by failed crawls, so the next crawl still begins as scheduled. If you modify the schedule in Oracle SES, the revised schedule overwrites the previous one in Enterprise Scheduler. If you deactivate a schedule, then the schedule is canceled in Enterprise Scheduler.
In the Oracle Enterprise Scheduling System (ESS) log, indexed documents have a 200 status code, and documents that are not indexed have a different status code.
See Also: Oracle Fusion Applications Administrator's Guide for information about managing Oracle Enterprise Scheduler Service and Jobs
If the scheduling requests are stuck, that is, if the crawlers are not progressing as expected, then you cannot perform operations such as start, stop, or delete on those scheduling requests. You must change the status of those schedules to Failed, and then restart them.
To recover stuck scheduling requests:
Log on to the Enterprise Manager Fusion Middleware Control console for the Common Domain.
In the left panel, expand Scheduling Services and click ESSAPP (server name).
In the top panel, click Scheduling Service to open a cascading menu.
Click Job Requests, then Search Job Requests to display the Request Search page.
In the Search box for Application, select SearchEssApp.
For Status, select Error Manual Recovery.
Click Search to see a list of stuck scheduling requests.
Fix each stuck request by taking these steps:
In the Request ID column, click a linked number.
In the top right corner, click Action.
Click Recover Stuck Request.
The following message is displayed:
Request Details: 18201(Error) #Request processing resulted in error with the below message. ERROR_MANUAL_RECOVERY: Request could not be recovered #The job request failed due to System error. #The internal processing of the job request is still not complete. To recover the request, click Action and select Recover Stuck Request.
After you fix all of the ESS requests and the Oracle SES schedule has a status of Failed, you can manage the schedules again.
By default, Oracle SES is configured to crawl Web sites in the intranet, so no additional configuration is required. However, to crawl Web sites on the Internet (also referred to as external Web sites), Oracle SES needs the HTTP proxy server information.
To register a proxy:
On the Global Settings page under Sources, select Proxy Settings.
Enter the proxy server name and port. Click Set Proxy.
Enter the internal host name suffix under Exceptions, so that internal Web sites do not go through the proxy server. Click Set Domain Exceptions.
To exclude an entire domain, omit http, begin the entry with *., and use the suffix of the host name; for example, *.example.com. Entries without the *. prefix are treated as a single host. Use the IP address only when the URL crawled is also specified using the IP address for the host name; the two must be consistent.
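As a rough sketch of how such exception entries behave (illustrative Python, not SES internals; the host names are hypothetical):

```python
# Sketch of domain-exception matching: entries beginning with "*." match any
# host ending in that suffix; entries without "*." match a single host exactly.

def bypasses_proxy(host: str, exceptions: list[str]) -> bool:
    for entry in exceptions:
        if entry.startswith("*."):
            suffix = entry[1:]          # "*.example.com" -> ".example.com"
            if host.endswith(suffix):
                return True
        elif host == entry:             # exact single-host match
            return True
    return False

exceptions = ["*.example.com", "intranet-host"]   # hypothetical entries
print(bypasses_proxy("www.example.com", exceptions))   # True
print(bypasses_proxy("intranet-host", exceptions))     # True
print(bypasses_proxy("www.example.org", exceptions))   # False
```

Note that under this interpretation the bare host example.com would not match *.example.com, which is why suffix entries and single-host entries are distinct.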
If the proxy requires authentication, then enter the proxy authentication information on the Global Settings - Authentication page.
The seed URL that you enter when you create a source is turned into an inclusion rule. For example, if www.example.com is the seed URL, then Oracle SES creates an inclusion rule such that only URLs containing the string www.example.com are crawled.
However, suppose that the example Web site includes URLs starting with example.com (without the www). Many pages have a different prefix on the site name; for example, the investor section of the site has URLs that start with investor.example.com. Always check the inclusion rules before crawling, then check the log after crawling to see what patterns have been excluded. In this case, you might add investor.example.com to the inclusion rules, or you might simply add example.com to cover all such prefixes.
To crawl outside the seed site (for example, if you are crawling text.us.oracle.com but want to follow links from it to oracle.com), consider removing the inclusion rules completely. Do so carefully: this action could lead the crawler into a very large number of sites.
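A sketch of the substring-based inclusion check described above (illustrative Python, not the SES implementation):

```python
# Sketch: a seed URL becomes a substring inclusion rule, so only URLs
# containing that string are followed. An empty rule list means no boundary,
# so every discovered URL passes -- which is why removing all inclusion
# rules must be done carefully.

def passes_inclusion(url: str, inclusion_rules: list[str]) -> bool:
    if not inclusion_rules:
        return True                      # no boundary: everything is crawled
    return any(rule in url for rule in inclusion_rules)

rules = ["www.example.com"]
print(passes_inclusion("http://www.example.com/about", rules))        # True
print(passes_inclusion("http://investor.example.com/report", rules))  # False
```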
If no boundary rule is specified, then crawling is limited by the underlying file system access privileges. Files accessible from the specified seed file URL are crawled, subject to the default crawling depth. The depth, which is 2 by default, is set on the Global Settings - Crawler Configuration page. For example, if the seed is file://localhost/home/user_a/, then the crawl picks up all files and directories under user_a to which it has access privileges. It crawls documents in the directory /home/user_a/level1, which is within the depth limit. The documents in the /home/user_a/level1/level2 directory are at depth 3, so they are not crawled.
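The depth arithmetic in this example can be sketched as follows (assuming the seed directory counts as depth 0 and each path component below it adds one level; this is an illustration, not SES code):

```python
# Sketch: compute a file's crawl depth relative to the seed directory.
# Files at depth <= 2 (the default limit) are crawled; deeper files are not.
from pathlib import PurePosixPath

def crawl_depth(seed_dir: str, path: str) -> int:
    rel = PurePosixPath(path).relative_to(PurePosixPath(seed_dir))
    return len(rel.parts)   # number of path components below the seed

seed = "/home/user_a"
print(crawl_depth(seed, "/home/user_a/report.txt"))             # 1
print(crawl_depth(seed, "/home/user_a/level1/doc.txt"))         # 2
print(crawl_depth(seed, "/home/user_a/level1/level2/doc.txt"))  # 3, beyond the default depth of 2
```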
The file URL can be in UNC (universal naming convention) format. A UNC file URL identifies files located within the host computer itself; for example, stcisfcr could appear as the name of the host computer in such a URL. The localhost string is optional: you can specify the URL path without the string localhost. Note that you cannot use the UNC format to access files on other computers.
On some computers, the path or file name can contain non-ASCII and multibyte characters. URLs are always represented using the ASCII character set: non-ASCII characters are represented using the hexadecimal representation of their UTF-8 encoding. For example, a space is encoded as %20, and a multibyte character is encoded as a sequence of such hexadecimal escapes.
You can enter spaces in simple (that is, not regular expression) boundary rules. Oracle SES automatically encodes these URL boundary rules. For example, Home Alone is specified internally as Home%20Alone. Oracle SES performs this encoding for the following:
File source simple boundary rules
URL string tests
File source seed URLs
Oracle SES does not alter regular expression rules. You must ensure that any regular expression rule is specified against the encoded file URL. Spaces are not allowed in regular expression rules.
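The encoding described here is standard percent-encoding of UTF-8 bytes, which Python's urllib.parse.quote demonstrates:

```python
# Percent-encoding of UTF-8 bytes, as described above: a space becomes %20,
# and each byte of a multibyte character becomes its own %XX escape.
from urllib.parse import quote

print(quote("Home Alone"))   # Home%20Alone
print(quote("résumé.txt"))   # r%C3%A9sum%C3%A9.txt  (é = UTF-8 bytes C3 A9)
```

A regular expression boundary rule would therefore need to match Home%20Alone, not Home Alone.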
Indexing dynamic pages can generate a very large number of URLs. From the target Web site, manually navigate through a few pages to understand what boundary rules should be set to avoid crawling duplicate pages.
You can control which parts of your sites can be visited by robots. If robots exclusion is enabled (the default), then the Web crawler traverses the pages based on the access policy specified in the Web server robots.txt file. The following sample robots.txt file specifies that no robots may visit any URL starting with /cyberworld/map/ or /tmp/, or the page /foo.html:

# robots.txt for http://www.example.com/
User-agent: *
Disallow: /cyberworld/map/
Disallow: /tmp/
Disallow: /foo.html
If the Web site is under your control, then you can tailor a specific robots rule for the crawler by specifying Oracle Secure Enterprise Search as the user agent. For example:
User-agent: Oracle Secure Enterprise Search
Disallow: /tmp/
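Python's standard urllib.robotparser module implements the same robots.txt conventions, so you can use it to sanity-check a rule such as the one above before a crawl (the URLs below are only examples):

```python
# Evaluate the sample robots.txt rule for the SES user agent using the
# standard-library robots.txt parser.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: Oracle Secure Enterprise Search",
    "Disallow: /tmp/",
])

agent = "Oracle Secure Enterprise Search"
print(rp.can_fetch(agent, "http://www.example.com/tmp/x.html"))   # False
print(rp.can_fetch(agent, "http://www.example.com/docs/a.html"))  # True
```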
The robots meta tag can instruct the crawler whether to index a Web page and whether to follow the links within it. For example:
<meta name="robots" content="noindex,nofollow">
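A minimal sketch, using Python's standard html.parser, of how such a tag could be interpreted (illustrative only, not SES code):

```python
# Read the robots meta tag to decide whether to index a page and whether
# to follow its links. Defaults to index+follow when no tag is present.
from html.parser import HTMLParser

class RobotsMeta(HTMLParser):
    def __init__(self):
        super().__init__()
        self.index = True
        self.follow = True

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            directives = a.get("content", "").lower()
            self.index = "noindex" not in directives
            self.follow = "nofollow" not in directives

p = RobotsMeta()
p.feed('<meta name="robots" content="noindex,nofollow">')
print(p.index, p.follow)   # False False
```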
Oracle SES always removes duplicate (identical) documents. Oracle SES does not index a page that is identical to one it has already indexed. Oracle SES also does not index a page that it reached through a URL that it has already processed.
With the Web Services API, you can enable or disable near duplicate detection and removal from the result list. Near duplicate documents are similar to one another; they may or may not be identical.
Check the inclusion rules applied to redirects; which rules apply depends on the type of redirect. The EQ_TEST.EQ$URL table stores all of the URLs that have been crawled or are scheduled to be crawled. Three kinds of redirects are defined in it:
Temporary Redirect: A redirected URL is always allowed if it is a temporary redirection (HTTP status code 302 or 307). A temporary redirection indicates that the original URL should still be used in the future. Temporary redirects cannot be identified from the EQ$URL table; you can find them only by filtering the log file.
Permanent Redirect: For a permanent redirection (HTTP status 301), the redirected URL is subject to boundary rules. Permanent redirection means that the original URL is no longer valid and the new (redirected) URL should be used instead. In EQ$URL, an HTTP permanent redirect has the status code 954.
Meta Redirect: Metatag redirection is treated as a permanent redirect; it also has status code 954 and is always checked against the boundary rules.
The STATUS column of EQ_TEST.EQ$URL lists the status codes. For descriptions of the codes, refer to Appendix B, "URL Crawler Status Codes."
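The redirect policy described above can be sketched as follows (illustrative Python; passes_boundary_rules stands in for the boundary-rule check of whatever source is being crawled):

```python
# Sketch of the redirect policy: temporary redirects (302, 307) are always
# followed; permanent redirects (301) are followed only if the target URL
# passes the source's boundary rules.

def follow_redirect(http_status: int, target_url: str,
                    passes_boundary_rules) -> bool:
    if http_status in (302, 307):      # temporary: always allowed
        return True
    if http_status == 301:             # permanent: subject to boundary rules
        return passes_boundary_rules(target_url)
    return False

within = lambda url: "www.example.com" in url   # hypothetical boundary rule
print(follow_redirect(302, "http://other.example.org/x", within))  # True
print(follow_redirect(301, "http://other.example.org/x", within))  # False
print(follow_redirect(301, "http://www.example.com/x", within))    # True
```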
Note: Some browsers, such as Mozilla and Firefox, do not allow redirecting a page to load a network file. Microsoft Internet Explorer does not have this limitation.
URL looping refers to the scenario where a large number of unique URLs all point to the same document. Looping sometimes occurs where a site contains a large number of pages, and each page contains links to every other page in the site. Ordinarily this is not a problem, because the crawler eventually analyzes all documents in the site. However, some Web servers attach parameters to generated URLs to track information across requests. Such Web servers might generate a large number of unique URLs that all point to the same document.
For example, two URLs that differ only in a p_origin_page parameter might refer to the same document: the p_origin_page parameter is different for each link because the referring pages are different. If a large number of parameters are specified and the number of referring links is large, then a single unique document could have thousands or tens of thousands of links referring to it. This is an example of how URL looping can occur.
Monitor the crawler statistics in the Oracle SES Administration GUI to determine which URLs and Web servers are being crawled the most. If you observe an inordinately large number of URL accesses to a particular site or URL, then you might want to do one of the following:
Exclude the Web server: This prevents the crawler from crawling any URLs at that host. (You cannot limit the exclusion to a specific port on a host.)
Reduce the crawling depth: This limits the number of levels of referred links the crawler follows. If you are observing URL looping effects on a particular host, then take a visual survey of the site to estimate the depth of its leaf pages. Leaf pages are pages that do not have links to other pages. As a general guideline, add three to the leaf page depth, and set the crawling depth to this value.
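One way to see why such URLs collapse to a single document is to normalize them by stripping the tracking parameter before comparing them. This is an illustrative sketch, not an SES feature; p_origin_page is taken from the example above:

```python
# Sketch: strip a known tracking parameter so that looping URLs that differ
# only in that parameter normalize to one canonical form.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"p_origin_page"}   # hypothetical list for this example

def canonicalize(url: str) -> str:
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(query)))

a = canonicalize("http://host/doc?id=7&p_origin_page=home")
b = canonicalize("http://host/doc?id=7&p_origin_page=news")
print(a == b)   # True: both normalize to http://host/doc?id=7
```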
Be sure to restart the crawler after altering any parameters. Your changes take effect only after restarting the crawler.
Oracle SES allocates 200M for the redo log during installation. 200M is sufficient to crawl a relatively large number of documents. However, if your disk has sufficient space to increase the redo log and if you are going to crawl a very large number of documents (for example, more than 300G of text), then increase the redo log file size for better crawl performance.
Note: The biggest transaction during crawling is the INDEX operation performed by Oracle Text. Check the AWR report or the V$SYSSTAT view to see the actual redo size generated during crawling. Roughly, 200M is sufficient to crawl up to 300G.
To increase the size of the redo log files:
Open SQL*Plus and connect as the SYSTEM user. It has the same password as SEARCHSYS.
Issue the following SQL statement to see the current redo log status:
SELECT vl.group#, member, bytes, vl.status
  FROM v$log vl, v$logfile vlf
 WHERE vl.group# = vlf.group#;

GROUP#  MEMBER                           BYTES      STATUS
------  -------------------------------  ---------  --------
     3  /var/opt/oracle/data/redo03.log  209715200  INACTIVE
     2  /var/opt/oracle/data/redo02.log  209715200  CURRENT
     1  /var/opt/oracle/data/redo01.log  209715200  INACTIVE
Drop the INACTIVE redo log file. For example, to drop group 3:
ALTER DATABASE DROP LOGFILE GROUP 3;

Database altered.
Create a larger redo log file with a command like the following. If you want to change the file location, specify the new location.
ALTER DATABASE ADD LOGFILE '/scratch/ses111/oradata/o11101/redo03.log'
  SIZE 400M REUSE;
Check the status to ensure that the file was created.
SELECT vl.group#, member, bytes, vl.status
  FROM v$log vl, v$logfile vlf
 WHERE vl.group# = vlf.group#;

GROUP#  MEMBER                           BYTES      STATUS
------  -------------------------------  ---------  --------
     3  /var/opt/oracle/data/redo03.log  419430400  UNUSED
     2  /var/opt/oracle/data/redo02.log  209715200  CURRENT
     1  /var/opt/oracle/data/redo01.log  209715200  INACTIVE
To drop a log file with a CURRENT status, issue the following ALTER statement, then check the results.
ALTER SYSTEM SWITCH LOGFILE;

SELECT vl.group#, member, bytes, vl.status
  FROM v$log vl, v$logfile vlf
 WHERE vl.group# = vlf.group#;

GROUP#  MEMBER                           BYTES      STATUS
------  -------------------------------  ---------  --------
     3  /var/opt/oracle/data/redo03.log  419430400  CURRENT
     2  /var/opt/oracle/data/redo02.log  209715200  ACTIVE
     1  /var/opt/oracle/data/redo01.log  209715200  INACTIVE
Issue the following SQL statement to change the status of Group 2 from ACTIVE to INACTIVE:
ALTER SYSTEM CHECKPOINT;

SELECT vl.group#, member, bytes, vl.status
  FROM v$log vl, v$logfile vlf
 WHERE vl.group# = vlf.group#;

GROUP#  MEMBER                           BYTES      STATUS
------  -------------------------------  ---------  --------
     3  /var/opt/oracle/data/redo03.log  419430400  CURRENT
     2  /var/opt/oracle/data/redo02.log  209715200  INACTIVE
     1  /var/opt/oracle/data/redo01.log  209715200  INACTIVE
Repeat steps 3, 4 and 5 for redo log groups 1 and 2.
If you are still not crawling all the pages you think you should, then check which pages were crawled by doing one of the following:
To check the crawler log file:
On the Home page, click the Schedules secondary tab to display the Crawler Schedules page.
Click the Log File icon to display the log file for the source.
To obtain the location of the full log, click the Status link. The Crawler Progress Summary and Log Files by Source section displays the full path to the log file.
To create a search source group:
On the Search page, click the Source Groups subtab.
Click New to display Create New Source Group Step 1.
Enter a name, then click Proceed to Step 2.
Select a source type, then shuttle only one source from Available Sources to Assigned Sources.
To search the source group:
On any page, click the Search link in the top right corner to open the Search application.
Select the group name, then issue a search term to list the matches within the source.
Select the group name, then click Browse to see a list of search groups:
The number after the group name identifies the number of browsed documents. Click the number to browse the search results.
Click the arrow before the group name to display a hierarchy of search results. The number of matches appears after each item in the hierarchy.