The fetcher compares the value of the fetcher.delay.max property to the value of the Crawl-Delay parameter in the robots.txt file.
The fetcher works as follows: if the delay specified in robots.txt is greater than the configured maximum delay, the crawler will not fully crawl that site; all pending work for the host is removed from the queue.
Note that the above behavior occurs only if the http.robots.ignore property is set to false (the default).
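The comparison described above can be sketched roughly as follows. This is an illustrative helper, not the fetcher's actual implementation; the function name `should_crawl` and the robots.txt parsing are assumptions, and the parameters stand in for the fetcher.delay.max and http.robots.ignore settings.

```python
def should_crawl(robots_txt: str, max_delay_seconds: float,
                 ignore_robots: bool = False) -> bool:
    """Return False when robots.txt requests a Crawl-Delay longer than
    the configured maximum, in which case pending work for the host
    would be dropped. Illustrative sketch only."""
    if ignore_robots:
        # Mirrors http.robots.ignore=true: the Crawl-Delay check is skipped.
        return True
    crawl_delay = 0.0
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "crawl-delay":
            try:
                crawl_delay = float(value.strip())
            except ValueError:
                pass  # ignore malformed Crawl-Delay values
    return crawl_delay <= max_delay_seconds
```

For example, with a maximum delay of 10 seconds, a robots.txt containing `Crawl-delay: 30` would cause the host to be skipped, while `Crawl-delay: 5` would be honored and crawling would proceed.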