The Web Crawler handles full crawls as follows:
1. The crawler creates the crawl history database, overwriting any previous database.
2. From the seed, the crawler generates a list of URLs to be visited.
3. The crawler queues each URL in the database with a status of pending, indicating that it has not yet been visited.
4. The crawler takes a URL from the queue, visits and processes the page, and changes the URL's status in the database to complete.
5. The crawler repeats step 4 until all queued URLs have been processed.
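The steps above can be sketched in code. This is a minimal illustration, not the crawler's actual implementation: the in-memory dict stands in for the crawl history database, and the hypothetical `FAKE_PAGES` table stands in for fetching pages over HTTP.

```python
from collections import deque

# Hypothetical stand-in for fetching a page and extracting its links;
# a real crawler would issue HTTP requests and parse the responses.
FAKE_PAGES = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": [],
}

def full_crawl(seed):
    # Create a fresh crawl history "database" (here, a dict mapping
    # URL -> status), discarding any previous state.
    history = {seed: "pending"}
    queue = deque([seed])

    # Visit each queued URL, queue newly discovered URLs as pending,
    # and mark the visited URL complete; repeat until the queue is empty.
    while queue:
        url = queue.popleft()
        for link in FAKE_PAGES.get(url, []):
            if link not in history:  # skip URLs already queued or visited
                history[link] = "pending"
                queue.append(link)
        history[url] = "complete"
    return history
```

Running `full_crawl("https://example.com/")` visits all three pages, so every URL in the returned history ends with the status complete.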