Long Range Context
Think of questions that require the AI Agent to piece together details found in distant parts of the document.
For instance, Benefits policy documents often contain sections for 'covered' and 'excluded' benefits. Imagine the user asks this question:
Question: “Are supplements and electrolytes covered?”

Answer: “No, high protein nutrition supplements are not covered. According to the context provided, nutritional or cosmetic therapy using high dose or mega quantities of vitamins, minerals, or elements, and other nutrition-based therapy, including high protein foods, are excluded from coverage. This exclusion applies to supplements, electrolytes, and foods of any kind, including high protein foods and low carbohydrate foods.”
Excerpts from the Source Document:
Page 80 introduces the plan exclusions section:
Plan Exclusions – What Is Not Covered
The Plan Exclusions are categorized and listed within key categories. These Exclusions apply to all Covered Persons. The Plan will not pay or approve benefits for any of the services, treatments, supplies, or other items described in this section, even if one or both of the following is true:
Page 81 starts the list of exclusions:
Alternative Treatments
Page 86 lists nutrition exclusions:
Nutrition
To generate the expected answer, the Oracle Fusion AI Agent must correlate the exclusions context established on pages 80 and 81 (where the Plan Exclusions section and its list begin) with the Nutrition entries on page 86 (which list supplements). If the Agent misses that framing and reads page 86 in isolation, it may erroneously conclude that supplements are covered.
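One way to make this kind of long-range dependency testable is to record, alongside each question, the set of pages the Agent must combine, and then check that the retrieved context actually spans all of them before grading the answer. The following is a minimal sketch of that idea under assumed conditions: the names (LongRangeTestCase, retrieval_covers_required_pages, and so on) are hypothetical and are not part of any Oracle Fusion API; adapt them to whatever evaluation harness you use.

```python
# Hypothetical sketch of a long-range-context test case and a coverage check.
# The point: if retrieval misses the pages that establish the exclusions
# framing, the answer check is likely to fail for the wrong reason.

from dataclasses import dataclass, field


@dataclass
class LongRangeTestCase:
    question: str
    expected_answer_keywords: list[str]
    # Pages the Agent must correlate to produce the expected answer.
    required_pages: set[int] = field(default_factory=set)


def retrieval_covers_required_pages(retrieved_pages: set[int],
                                    case: LongRangeTestCase) -> bool:
    """True only if every page needed for the answer was retrieved."""
    return case.required_pages.issubset(retrieved_pages)


def answer_reflects_exclusion(answer: str, case: LongRangeTestCase) -> bool:
    """Loose keyword check that the answer reflects the exclusion context."""
    answer_lower = answer.lower()
    return all(kw.lower() in answer_lower for kw in case.expected_answer_keywords)


if __name__ == "__main__":
    case = LongRangeTestCase(
        question="Are supplements and electrolytes covered?",
        expected_answer_keywords=["not covered", "excluded"],
        required_pages={80, 81, 86},  # exclusions intro, exclusion list, Nutrition
    )

    # Simulated retrieval that skipped the exclusions framing on pages 80-81.
    retrieved_pages = {86}
    if not retrieval_covers_required_pages(retrieved_pages, case):
        print("Retrieval gap: the Agent never saw the exclusions framing, "
              "so it may wrongly report supplements as covered.")
```

Tracking required pages separately from the expected answer makes the failure mode diagnosable: a missing page points to a retrieval problem, while a retrieved-but-ignored page points to a reasoning problem.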