Guardrails - Faithfulness
This Quick Start notebook introduces Fiddler Guardrails, an enterprise solution that safeguards LLM applications from risks like hallucinations, toxicity, and jailbreaking attempts. Learn how to implement the Faithfulness Model, which evaluates factual consistency between AI-generated responses and their source context in RAG applications.
Inside you'll find:

- Step-by-step implementation instructions
- Code examples for evaluating response accuracy (previewed in the sketch after this list)
- A practical demonstration of hallucination detection
- Sample inputs and outputs with score interpretation
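To give a feel for the workflow before you open the notebook, here is a minimal sketch of calling the faithfulness guardrail over REST. The endpoint path, payload keys, and the response field read below are illustrative assumptions, not the definitive contract; the notebook and API documentation define the exact request and response shapes.

```python
# Minimal sketch of a faithfulness check over REST.
# The endpoint URL, payload shape, and response field below are
# illustrative assumptions -- consult the notebook and API docs
# for the exact contract.
import os
import requests

GUARDRAILS_URL = "https://guardrails.cloud.fiddler.ai/v3/guardrails/ftl-response-faithfulness"  # assumed endpoint
API_KEY = os.environ["FIDDLER_API_KEY"]  # key from the sign-up page

payload = {
    "data": {
        # Source context retrieved by the RAG pipeline
        "context": "The Eiffel Tower is 330 metres tall and located in Paris.",
        # LLM-generated answer to validate against that context
        "response": "The Eiffel Tower is 330 metres tall.",
    }
}

resp = requests.post(
    GUARDRAILS_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
score = resp.json().get("fdl_faithful_score")  # assumed response field name
print(f"Faithfulness score: {score}")
```

A higher score generally indicates stronger factual consistency between the response and its source context, while a low score flags a likely hallucination; the notebook's sample inputs and outputs show how to interpret these values in practice.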
Before running the notebook, get your API key from the sign-up page below. See the documentation and FAQs for more help getting started.
Get Started with Your Free Guardrails →

Click this link to get started using Google Colab →

Or download the notebook directly from GitHub.