Faithfulness Quick Start Notebook

This Quick Start notebook introduces Fiddler Guardrails, an enterprise solution that safeguards LLM applications from risks like hallucinations, toxicity, and jailbreaking attempts. Learn how to implement the Faithfulness Model, which evaluates factual consistency between AI-generated responses and their source context in RAG applications.

Inside you'll find:

  • Step-by-step implementation instructions

  • Code examples for evaluating response accuracy

  • Practical demonstration of hallucination detection

  • Sample inputs and outputs with score interpretation
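As a preview of what the notebook walks through, here is a minimal sketch of calling a faithfulness guardrail and interpreting its score. The endpoint URL, payload shape, and response field name below are illustrative assumptions, not the documented API; consult the Fiddler Guardrails documentation for the actual values.

```python
import json
import urllib.request

# Assumed endpoint for illustration only -- check the Fiddler docs for the real URL.
GUARDRAILS_URL = "https://guardrails.cloud.fiddler.ai/v3/guardrails/ftl-response-faithfulness"


def check_faithfulness(api_key: str, response_text: str, context: str,
                       url: str = GUARDRAILS_URL) -> float:
    """POST a response/context pair to the guardrail and return its score.

    The payload keys and the "fdl_faithful_score" response field are
    placeholders; the notebook shows the actual request and response shapes.
    """
    body = json.dumps(
        {"data": {"response": response_text, "context": context}}
    ).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["fdl_faithful_score"]


def is_faithful(score: float, threshold: float = 0.5) -> bool:
    # Scores near 1.0 indicate the response is grounded in the source context;
    # scores near 0.0 suggest a likely hallucination. The threshold is a
    # tunable choice, not a fixed part of the API.
    return score >= threshold
```

In a RAG pipeline, you would call `check_faithfulness` on each generated answer together with its retrieved context, then gate or flag responses whose score falls below your chosen threshold.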

Before running the notebook, get your API key from the sign-up page below. See the documentation and FAQs for more help with getting started.

Get Started with Your Free Guardrails →

Open the notebook in Google Colab →


Or download the notebook directly from GitHub.
