- Inventing medication dosages or drug interactions that aren’t documented
- Creating fake statistics, policy terms, or research studies
- Confidently stating incorrect coverage limits or eligibility criteria
- Making up regulatory requirements or compliance procedures
- Inventing function names in code that don’t exist
Why This Matters
Whether you’re a healthcare provider checking treatment protocols, a financial advisor verifying compliance requirements, an insurance agent confirming policy details, or a developer building software, you need accurate information. A hallucinated answer isn’t just unhelpful. It can lead to compliance violations, incorrect patient care, financial losses, or costly errors.
Gurubase’s Approach: Seven Layers of Protection
Gurubase is a hallucination-resistant AI platform. LLMs have an inherent tendency to hallucinate, and no system can completely eliminate this. However, Gurubase significantly reduces the risk through a “Trust, Then Verify” approach with multiple layers of verification. Think of it like quality control in manufacturing: each layer catches different types of problems, and the system will refuse to answer when it cannot provide reliable information.
Layer 1: Smart Retrieval - Finding the Right Information
What happens: When you ask a question, Gurubase searches through your documentation, knowledge bases, policy documents, and other sources using multiple strategies simultaneously.
How it works:
- Your question is analyzed and rephrased in multiple ways
- Each version searches for different relevant information
- Results from different sources are combined and duplicates removed
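To make the mechanics concrete, here is a minimal sketch of multi-query retrieval with deduplication. The `rephrase_question` and `search` helpers are assumptions standing in for an LLM query rewriter and a search backend; none of this is Gurubase’s actual API.

```python
# Hypothetical sketch of multi-query retrieval with deduplication.
# `rephrase_question` and `search` are assumed helpers, not Gurubase's API.

def retrieve(question, rephrase_question, search, top_k=5):
    # Generate several rewordings of the user's question
    variants = [question] + rephrase_question(question, n=3)

    # Run each variant against the knowledge base and merge the results,
    # dropping documents already seen under another variant
    seen_ids = set()
    merged = []
    for variant in variants:
        for doc in search(variant, top_k=top_k):
            if doc["id"] not in seen_ids:
                seen_ids.add(doc["id"])
                merged.append(doc)
    return merged
```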
Layer 2: Relevance Scoring - Rating Each Source
What happens: Before using any information to answer your question, Gurubase rates how relevant each piece is on a scale from 0 to 100%.
How it works:
- Each found document is evaluated by an AI specifically trained to judge relevance
- Documents get scores: 0% (irrelevant) to 100% (perfect match)
- The AI explains WHY it gave each score
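As a rough illustration of this step, assume an `llm_judge` helper that returns a score and a rationale; both the helper and the prompt wording below are hypothetical, not Gurubase’s implementation.

```python
# Hypothetical relevance-scoring pass: an LLM judge rates each retrieved
# document from 0 to 100 and explains its score. `llm_judge` is assumed.

def score_documents(question, documents, llm_judge):
    scored = []
    for doc in documents:
        result = llm_judge(
            f"Question: {question}\n"
            f"Document: {doc['text']}\n"
            "Rate relevance from 0 to 100 and explain why."
        )
        scored.append({
            **doc,
            "relevance": result["score"],     # 0 = irrelevant, 100 = perfect match
            "reason": result["explanation"],  # the judge's stated rationale
        })
    return scored
```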
Layer 3: Trust Score - Overall Answer Quality
What happens: Gurubase calculates a “Trust Score” for your answer based on the quality of all sources used.
How it works:
- Combines all individual source scores into one overall score
- Uses a configurable threshold to decide if the answer is trustworthy enough
- Factors in how many sources were found and how well they agree
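One plausible way to fold per-source scores into a single number is sketched below. The exact formula (an average damped by source coverage) is an illustrative assumption, not Gurubase’s documented algorithm.

```python
# Illustrative trust-score aggregation: average the per-source relevance
# scores, then damp the result when few sources back the answer.
# This formula is an assumption, not Gurubase's published method.

def trust_score(relevances, min_sources=3):
    if not relevances:
        return 0.0
    average = sum(relevances) / len(relevances)
    # Penalize answers supported by fewer sources than we'd like
    coverage = min(len(relevances) / min_sources, 1.0)
    return average * coverage
```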
Layer 4: Explicit Failure - Saying “I Don’t Know”
What happens: If Gurubase can’t find enough relevant information, it refuses to answer instead of making something up.
How it works:
- If no sources score high enough, the system stops
- Instead of generating an answer, it tells you honestly: “I don’t have enough information”
- These “out of context” questions are logged to help improve the knowledge base
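Conceptually, this gate is just a threshold check. The message text and the `log_out_of_context` helper below are assumptions for illustration.

```python
# Illustrative refusal gate: below the configured threshold, return an
# honest "I don't know" and log the gap instead of generating an answer.

REFUSAL_MESSAGE = "I don't have enough information to answer this reliably."

def answer_or_refuse(question, score, threshold, generate, log_out_of_context):
    if score < threshold:
        # Record the unanswerable question so the knowledge base can grow
        log_out_of_context(question, score)
        return REFUSAL_MESSAGE
    return generate(question)
```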
Layer 5: Guided Answer Generation - Teaching the AI to Stay Grounded
What happens: When generating an answer, Gurubase gives the AI strict instructions to only use the approved sources.
How it works:
- The AI is explicitly told: “Use ONLY information from the provided sources”
- It’s instructed to cite sources and admit limitations
- Special rules prioritize manually-edited correct answers over generated ones
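A sketch of what such a grounding prompt could look like follows. Gurubase’s actual prompts are not public, so the wording and the `manually_edited` flag are assumptions.

```python
# Hypothetical grounding prompt illustrating the pattern described above.
# Gurubase's real prompt wording is proprietary.

GROUNDED_SYSTEM_PROMPT = (
    "Use ONLY information from the provided sources to answer.\n"
    "If the sources do not contain the answer, say so explicitly.\n"
    "Cite the source for every claim you make.\n"
    "Treat sources marked [EDITED] as authoritative and never contradict them.\n"
)

def build_prompt(question, sources):
    blocks = []
    for source in sources:
        tag = "EDITED" if source.get("manually_edited") else "RETRIEVED"
        blocks.append(f"[{tag}] {source['text']}")
    return GROUNDED_SYSTEM_PROMPT + "\nSources:\n" + "\n".join(blocks) + f"\n\nQuestion: {question}"
```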
Layer 6: Manual Override - Human Expertise Wins
What happens: If experts have manually written or corrected an answer, that version always takes priority over AI-generated content.
How it works:
- Edited answers are marked as “highest priority”
- When generating an answer, manually corrected information overrides everything else
- Other sources can supplement but never contradict the edited version
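For example, ordering sources so that human-edited answers always come first might look like this; the `manually_edited` and `relevance` field names are assumed.

```python
# Illustrative priority ordering: manually edited answers sort ahead of
# everything else, so the generator treats them as the primary source.

def prioritize_sources(sources):
    return sorted(
        sources,
        key=lambda s: (
            not s.get("manually_edited", False),  # edited answers first
            -s.get("relevance", 0),               # then by relevance, descending
        ),
    )
```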
Layer 7: Complete Audit Trail - Transparency in Every Answer
What happens: Gurubase logs every decision it makes: which sources were used, which were rejected, and why.
How it works:
- Every source’s relevance score is recorded
- Sources used in the answer are shown to you
- You can see exactly what information the answer is based on
- Trust scores are displayed with color coding
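A minimal sketch of the kind of record such an audit trail could store per question is shown below; the field names are illustrative, not Gurubase’s schema.

```python
# Hypothetical audit record for one question. Field names are assumptions
# chosen to mirror the decisions described above.

from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    question: str
    trust_score: float
    sources_used: list = field(default_factory=list)      # each: id, score, reason
    sources_rejected: list = field(default_factory=list)  # each: id, score, reason
    answered: bool = True  # False when the system refused to answer
```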
Trust Score: Your Quality Indicator
Every answer gets a visual trust indicator so you know how confident Gurubase is:

| Score Range | What It Means |
|---|---|
| 90-100% | Excellent - Multiple high-quality sources |
| 80-89% | Very Good - Strong source support |
| 70-79% | Good - Solid information available |
| 60-69% | Acceptable - Use with awareness |
| 50-59% | Caution - Limited information |
| Below 50% | Low Confidence - Verify carefully |
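If you wanted to reproduce this banding in your own tooling, a simple mapping might look like the sketch below; the labels come from the table above, while the color assignments are assumptions.

```python
# Map a trust score to the bands from the table above.
# The color assignments are illustrative assumptions.

def trust_band(score):
    bands = [
        (90, "Excellent", "green"),
        (80, "Very Good", "green"),
        (70, "Good", "yellow"),
        (60, "Acceptable", "yellow"),
        (50, "Caution", "orange"),
    ]
    for floor, label, color in bands:
        if score >= floor:
            return label, color
    return "Low Confidence", "red"
```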
What Makes This Different?
Traditional AI Chatbots:
- ❌ Generate plausible-sounding answers even when no supporting data exists
- ❌ Rarely admit what they don’t know
- ❌ Provide no sources for verification
- ❌ Give no indication of confidence
Gurubase Approach:
- ✅ Based on actual documentation, not assumptions
- ✅ Explicitly states what isn’t found
- ✅ Provides actionable next steps
- ✅ Shows sources for verification
- ✅ High trust scores when information is clear and well-supported
When Gurubase Says “I Don’t Know”
This is a feature, not a bug. Here are scenarios where Gurubase will refuse to answer:
Example 1: Out of Scope
The question is about a topic the knowledge base doesn’t cover, so no relevant sources are found.
Example 2: Insufficient Information
Related sources exist, but none score high enough to support a reliable answer.
Example 3: Conflicting Information
The available sources disagree, so a single confident answer can’t be supported.
Continuous Improvement: The Feedback Loop
Gurubase learns from its limitations:
Tracking Unanswerable Questions
Every time Gurubase can’t answer a question, it records:
- What was asked
- Why it couldn’t answer
- What information was missing
Using This Data
- Identify documentation gaps
- Prioritize new content creation
- Understand what users actually need
- Improve knowledge base coverage
Best Practices for Users
1. Check the Trust Score
- 80%+: Highly reliable, well-documented
- 60-79%: Good information, verify if critical
- 50-59%: Limited info, double-check
- Below 50% or Rejected: Find alternative sources
2. Review the Sources
Always available below each answer. Click to verify:
- Is this from official documentation?
- Is the information recent?
- Does it match your use case?
3. Ask Follow-up Questions
If something is unclear or the trust score is low, rephrase your question or add more context; Gurubase understands follow-up questions within a conversation.
4. Report Issues
If an answer seems wrong despite a high trust score:
- Use the feedback buttons
- Helps improve the system
- Benefits all users
FAQ
“What is the Gurubase Trust Score?”
The Trust Score is a percentage (0-100%) that indicates how confident Gurubase is in its answer based on the quality and relevance of the sources used. A higher score means the answer is well-supported by multiple high-quality sources from your knowledge base. Scores above 80% indicate highly reliable answers, while lower scores suggest limited information is available. The threshold is configurable per Guru: you can set it higher (e.g., 70%) for high-stakes domains like finance or healthcare, or lower (e.g., 50%) for general education content. If the Trust Score falls below your configured threshold, Gurubase will refuse to answer rather than provide unreliable information.
“Why not just let the AI be creative and fill in gaps?”
In regulated industries and critical decisions, creativity is dangerous. You need facts. A creative answer might:
- State incorrect coverage amounts → Members make wrong financial decisions
- Invent compliance requirements → Organizations face regulatory penalties
- Suggest non-existent procedures → Patient care is compromised
- Make up policy terms → Claims are processed incorrectly
“What if I need an answer that’s not in the documentation?”
Gurubase identifies this explicitly. You’ll know:
- Exactly what information exists in your knowledge base
- What’s missing or unclear
- Where to look next (member services, compliance team, etc.)
“Isn’t refusing to answer sometimes frustrating?”
Short-term frustration beats long-term problems:
- Giving a member incorrect coverage information that leads to unexpected bills
- Making compliance decisions based on hallucinated regulations
- Training staff with incorrect procedures
“How is this different from just searching documentation manually?”
Gurubase adds value through:
- Semantic understanding: Finds relevant info even with different wording
- Synthesis: Combines information from multiple sources
- Quality scoring: Tells you how confident to be
- Context: Understands follow-up questions in conversation
- Speed: Instant answers vs. manual searching and reading
Summary
Gurubase is hallucination-resistant, significantly reducing the risk of incorrect information through seven layers of quality checks. While no AI system can completely eliminate hallucinations (they are inherent to how LLMs work), Gurubase would rather say “I don’t know” than guess.
The techniques described here represent just the core layers of our hallucination prevention system. Gurubase employs additional proprietary methods and safeguards that we continuously refine and improve based on real-world usage and the latest AI research.
When Gurubase does give you an answer, you know:
- It’s based on real sources from your documentation
- You can verify every claim with provided links
- The trust score tells you how confident to be
- Any limitations or gaps are explicitly stated