Guardrails - Safety
Last updated
Was this helpful?
Last updated
Was this helpful?
This Quick Start Notebook introduces Fiddler Guardrails' Safety Detection capabilities, an essential component of our enterprise solution for protecting LLM applications. Learn how to implement the Safety Model, which identifies harmful, sensitive, or jailbreaking content in both inputs and outputs of your generative AI systems.
Inside you'll find:
Step-by-step implementation instructions
Code examples for safety evaluation
Practical demonstration of harmful content detection
Sample inputs and outputs with risk score interpretation
The notebook requires a free trial API key to run, which you can obtain through the sign-up link below. Additional resources, including comprehensive documentation and FAQs, are also provided to help you implement robust safety guardrails for your LLM applications.
Click this link to get started using Google Colab →
Or download the notebook directly from GitHub.