Guardrails - Faithfulness
This Quick Start notebook introduces Fiddler Guardrails, an enterprise solution that safeguards LLM applications against risks such as hallucinations, toxicity, and jailbreak attempts. It walks through the Faithfulness Model, which scores the factual consistency between an AI-generated response and its source context in RAG applications.
Inside you'll find:
Step-by-step implementation instructions
Code examples for evaluating response accuracy
Practical demonstration of hallucination detection
Sample inputs and outputs with score interpretation
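As a preview of what the notebook covers, the sketch below shows the general shape of a faithfulness check: send the model's response together with its source context to the Guardrails endpoint, then interpret the returned score. The endpoint URL, payload field names, and the 0.5 decision threshold here are illustrative assumptions, not the documented API; consult the notebook and the Guardrails documentation for the exact values for your trial instance.

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with the URL shown for your trial instance.
GUARDRAILS_URL = "https://guardrails.example.fiddler.ai/v3/guardrails/faithfulness"


def build_request(api_key: str, response_text: str, context: str) -> urllib.request.Request:
    """Assemble the HTTP request for a faithfulness check.

    The payload field names ("response", "context") are assumptions for
    illustration; the notebook shows the exact schema.
    """
    payload = json.dumps(
        {"data": {"response": response_text, "context": context}}
    ).encode("utf-8")
    return urllib.request.Request(
        GUARDRAILS_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


def is_faithful(score: float, threshold: float = 0.5) -> bool:
    """Interpret a faithfulness score: higher means more grounded in context.

    The 0.5 threshold is purely illustrative; tune it to your application's
    tolerance for hallucinated content.
    """
    return score >= threshold
```

In practice you would send the request (e.g. with `urllib.request.urlopen`), read the score from the JSON response, and gate or flag any response that `is_faithful` rejects before it reaches the user.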
Running the notebook requires a free trial API key, which you can obtain through the sign-up link below. Additional resources, including the full documentation and FAQs, are also provided to help you get started.
Click this link to get started using Google Colab →
Or download the notebook directly from GitHub.