
Repeating the Benchmarking Process


To validate that the changes made to the corpus during the Analyze and Tune process were productive, you must run a new benchmark to measure the effect of your modifications to the corpus and category set.

To repeat the benchmark process

  1. Navigate to the Knowledge Base Benchmark tab and click Run Benchmark.

    For more information on the benchmarking process, see Benchmarking the Knowledge Base Model.

  2. Examine the results as described above, and repeat the Knowledge Base Benchmark and Analyze workflow until the score for each category is at least 50%.

    NOTE:  The feedback mechanism quickly tunes the knowledge base model to the operating environment; the initial knowledge base model is only a starting point from which Siebel Smart Answer learns and refines the categorization models inside the knowledge base model. Therefore, if you cannot reach the 50% accuracy mark, you can rely on the feedback mechanisms to make the corrections in the production environment.

  3. Whenever you make changes to the categories or the corpus in the Siebel Smart Answer Administration Tool, always build the knowledge base model so that the changes take effect in the Siebel Smart Answer run-time system.

    For more information on how to build a Siebel Smart Answer knowledge base model, see Building a New Knowledge Base Model.

Siebel Smart Answer Guide Copyright © 2015, Oracle and/or its affiliates. All rights reserved. Legal Notices.