Guardrails - Safety
Last updated
Was this helpful?
Last updated
Was this helpful?
This Quick Start Notebook introduces Fiddler Guardrails' Safety Detection capabilities, an essential component of our enterprise solution for protecting LLM applications. Learn how to implement the Safety Model, which identifies harmful, sensitive, or jailbreaking content in both inputs and outputs of your generative AI systems.
Inside you'll find:
Step-by-step implementation instructions
Code examples for safety evaluation
Practical demonstration of harmful content detection
Sample inputs and outputs with risk score interpretation
Before running the notebook, get your API key from the sign-up page below. See the documentation and FAQs for more help with getting started.
Click
Or download the notebook directly from .