You do not need to run Forge if you run a CAS crawl that is configured to produce MDEX-compatible output as part of your update process. This example runs a baseline update that includes a full CAS crawl. The crawl writes MDEX-compatible output, and the update then invokes Dgidx to process the records, dimensions, and index configuration produced by the crawl. To create this sequential workflow of a CAS crawl followed by a baseline update, you add a call to runBaselineCasCrawl() to the baseline update script.
For example, this baseline update script calls CAS.runBaselineCasCrawl("${lastMileCrawlName}"), which runs a CAS crawl that writes MDEX-compatible output. The script then continues with baseline update processing by running Dgidx and distributing the index files.
<!-- ########################################################################
# Baseline update script
#
-->
<script id="BaselineUpdate">
  <log-dir>./logs/provisioned_scripts</log-dir>
  <provisioned-script-command>./control/baseline_update.bat</provisioned-script-command>
  <bean-shell-script>
    <![CDATA[
    log.info("Starting baseline update script.");

    // obtain lock
    if (LockManager.acquireLock("update_lock")) {

      // clean directories
      CAS.cleanCumulativePartials();
      Dgidx.cleanDirs();

      // run crawl and archive any changes in the dvalId mappings
      CAS.runBaselineCasCrawl("${lastMileCrawlName}");
      CAS.archiveDvalIdMappingsForCrawlIfChanged("${lastMileCrawlName}");

      // archive logs and run the Indexer
      Dgidx.archiveLogDir();
      Dgidx.run();

      // distribute index, update Dgraphs
      DistributeIndexAndApply.run();

      // archive the index
      Dgidx.archiveIndex();

      // (start or) cycle the LogServer
      LogServer.cycle();

      // release lock
      LockManager.releaseLock("update_lock");
      log.info("Baseline update script finished.");
    } else {
      log.warning("Failed to obtain lock.");
    }
    ]]>
  </bean-shell-script>
</script>
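Note that if any step between acquireLock() and releaseLock() throws an exception, the script above exits without releasing the update lock, and later updates cannot acquire it until it is released manually. The following is a minimal hardening sketch, not the shipped script: it wraps the same steps in a try/finally block so the lock is always released. It assumes the same EAC utilities (LockManager, CAS, Dgidx, DistributeIndexAndApply, LogServer) shown above.

log.info("Starting baseline update script.");

// obtain lock
if (LockManager.acquireLock("update_lock")) {
  try {
    // same steps as above: clean, crawl, index, distribute, archive
    CAS.cleanCumulativePartials();
    Dgidx.cleanDirs();
    CAS.runBaselineCasCrawl("${lastMileCrawlName}");
    CAS.archiveDvalIdMappingsForCrawlIfChanged("${lastMileCrawlName}");
    Dgidx.archiveLogDir();
    Dgidx.run();
    DistributeIndexAndApply.run();
    Dgidx.archiveIndex();
    LogServer.cycle();
    log.info("Baseline update script finished.");
  } finally {
    // always release the lock, even if a step above throws
    LockManager.releaseLock("update_lock");
  }
} else {
  log.warning("Failed to obtain lock.");
}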
You run the baseline update by running the baseline_update script in the apps/<app dir>/control directory.
For example:
C:\Endeca\apps\DocApp\control>baseline_update.bat
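The baseline_update wrapper invokes the provisioned BaselineUpdate script through the EAC. In applications built with the Deployment Template, you can typically also run a provisioned script directly by its script id with the runcommand utility (the script id here matches the id attribute in the configuration above); for example:

C:\Endeca\apps\DocApp\control>runcommand.bat BaselineUpdate run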