  • Anti-Hallucination 📈 - Libraria’s anti-hallucination query algorithm helps prevent hallucinations and surfaces the most relevant information from your data
  • Rich Metadata 🖼️ - Libraria’s API returns rich metadata, including the source, type, images, documents, OG tags, and more.


Available metadata includes the following:

  • Images
    • Any image snippets from the documents used to answer the question are added to this array
  • Documents
    • An unfiltered list of the documents pulled from your knowledge base to answer the user’s question. You can see each document’s similarity score, content, and more.
  • Help Articles
    • Each help article includes a link (URL), and the list has been filtered down to the articles that are actually useful and relevant to the user’s question
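As a sketch of how you might consume this metadata, the snippet below filters the returned documents down to the ones similar enough to have shaped the answer. The field names (`metadata`, `images`, `documents`, `help_articles`, `similarity`, `source`) are assumptions based on this page, not the exact API schema.

```python
# Hypothetical response shape -- field names are illustrative, not
# Libraria's exact schema.
response = {
    "answer": "You can export your data from Settings.",
    "metadata": {
        "images": ["https://example.com/screenshot.png"],
        "documents": [
            {"source": "https://example.com/docs/export", "similarity": 0.91,
             "content": "Exports live under Settings > Data."},
            {"source": "https://example.com/blog/roadmap", "similarity": 0.42,
             "content": "Our roadmap for next year..."},
        ],
        "help_articles": [
            {"title": "Exporting your data",
             "url": "https://example.com/help/export"},
        ],
    },
}

def cited_sources(resp: dict, min_similarity: float = 0.75) -> list[str]:
    """Keep only the documents similar enough to be worth citing."""
    docs = resp.get("metadata", {}).get("documents", [])
    return [d["source"] for d in docs
            if d.get("similarity", 0.0) >= min_similarity]
```

Because the documents list is unfiltered, applying your own similarity cutoff like this is useful before displaying sources to end users.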

Anti-Hallucination

In order to prevent your Libraria AI Chatbot from going rogue and answering questions incorrectly, it is important to understand both what Libraria does behind the scenes and what you can configure yourself.

What Libraria Does

  • Libraria’s backend has an agent that sends multiple requests to our LLM to ensure that the answer you get is accurate and grounded in the documents provided

What you can do if it is still experiencing hallucinations

  1. Create custom answers. Creating a custom answer is the most reliable way to make your AI chatbot respond exactly the way you want.
  2. Modify your Pre-Conversation. Adding specific rules to your pre-conversation helps your chatbot phrase things exactly how you want. [TODO: add common examples]
  3. Turn on “Only use my documents”. Libraria gives you the option to use our default prompt for preventing your AI from using documents that don’t match.
  4. Turn on “Return Documents Directly”. Bypass ChatGPT entirely and return the custom answer or document verbatim instead.
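Steps 1 and 4 combine into a simple pattern worth understanding: if a custom answer matches the question, return it verbatim and never call the LLM at all. The sketch below illustrates that flow; the function and parameter names are hypothetical, not Libraria’s API.

```python
def respond(question: str, custom_answers: dict[str, str], ask_llm) -> str:
    # Custom answers (step 1) plus "Return Documents Directly" (step 4):
    # on a match, return the stored answer verbatim and skip the LLM
    # entirely -- a verbatim answer cannot hallucinate.
    key = question.strip().lower()
    if key in custom_answers:
        return custom_answers[key]
    # No match: fall back to the normal LLM-backed pipeline.
    return ask_llm(question)
```

Because the matched answer is returned word-for-word, this path removes hallucination risk for your most important questions.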