How can I best reduce the risk of my agent hallucinating?

Large language models (LLMs) can hallucinate, that is, fabricate plausible-sounding but unsupported details, when they lack sufficient information to answer a request.

To mitigate this issue, consider these strategies:
  • Supply the necessary knowledge: Integrate relevant knowledge sources such as business objects, REST APIs, or RAG documents so the LLM has access to accurate, up-to-date information.
  • Use explicit prompt instructions: Clearly instruct the LLM not to add or infer information beyond what is provided, reducing the risk of embellishment or fabrication (see the sketch after this list for how the two strategies can be combined).
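
The following is a minimal sketch of both strategies together, assuming a generic chat-style LLM interface: supplied knowledge is injected into the prompt as context, and the system prompt explicitly forbids answering beyond that context. The system prompt wording, document contents, and message format are illustrative assumptions, not a specific product API.

```python
# Sketch: ground the prompt with supplied knowledge and add explicit
# "do not infer" instructions. Names and wording are illustrative.

SYSTEM_PROMPT = (
    "Answer using ONLY the information in the 'Context' section below. "
    "Do not add, infer, or embellish details beyond what is provided. "
    "If the context does not contain the answer, reply: "
    "'I don't have enough information to answer that.'"
)

def build_grounded_prompt(question: str, documents: list[str]) -> list[dict]:
    """Pair the user's question with the supplied knowledge sources."""
    # Knowledge could come from business objects, REST API responses, or RAG retrieval.
    context = "\n\n".join(f"[Source {i + 1}]\n{doc}" for i, doc in enumerate(documents))
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

if __name__ == "__main__":
    docs = ["Refunds are processed within 5 business days of approval."]
    messages = build_grounded_prompt("How long do refunds take?", docs)
    # Send `messages` to your LLM client of choice; printed here for inspection.
    for message in messages:
        print(f"{message['role']}: {message['content']}\n")
```

Keeping the grounding context and the refusal instruction in the same prompt gives the model both the facts it needs and a safe fallback when those facts are missing, which together address the two failure modes that lead to hallucination.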