© 2024 Fiddler Labs, Inc.

Fiddler Trust Service

The Fiddler Trust Service is a specialized infrastructure component of the Fiddler AI platform that hosts purpose-built large language models (LLMs) designed specifically for AI monitoring and guardrail use cases. These dedicated models, known as Fiddler Trust Models (or Fast Trust Models), are optimized to evaluate LLM outputs with significantly higher efficiency than general-purpose LLMs while maintaining comparable quality assessments.

This service provides computational infrastructure that powers both Fiddler's observability features (by generating quality metrics for LLM outputs) and its real-time protection capabilities (through Fiddler Guardrails). By using purpose-built models rather than general-purpose LLMs, the Fiddler Trust Service delivers evaluations with lower latency, reduced costs, and improved reliability compared to using third-party LLM APIs.

The Fiddler Trust Service operates as a managed service within the Fiddler platform ecosystem, handling the secure processing of customer LLM inputs and outputs to generate trust metrics and enforce guardrail policies.

How Fiddler Uses Trust Service

The Fiddler Trust Service serves as the computational backbone for Fiddler's LLM monitoring and guardrail capabilities. It hosts the specialized Fiddler Trust Models that power two primary functions within the platform:

For observability features, the service processes LLM inputs and outputs to generate specialized metrics that evaluate output quality, safety, and alignment. These metrics are then integrated into Fiddler's monitoring dashboards and alerting systems.

For real-time protection through Fiddler Guardrails, the service evaluates potential LLM outputs against customizable safety policies before they reach end users, filtering out problematic content and providing detailed explanations of policy violations.
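The pre-response filtering flow described above can be sketched in a few lines. The metric names, score ranges, and thresholds below are illustrative assumptions, not the actual Guardrails policy schema; in a real integration, the scores would come from a Fiddler Guardrails API call rather than being passed in directly.

```python
# Illustrative policy: block a response when any trust score exceeds
# its configured limit. Metric names and limits are hypothetical.
POLICY_LIMITS = {"toxicity": 0.8, "hallucination": 0.7}

def violations(scores: dict, limits: dict = POLICY_LIMITS) -> list:
    """Return the names of all metrics whose score exceeds its limit."""
    return sorted(name for name, limit in limits.items()
                  if scores.get(name, 0.0) > limit)

def guarded_response(candidate: str, scores: dict) -> str:
    """Return the candidate output, or a redaction notice if policy fails."""
    failed = violations(scores)
    if failed:
        return "[response withheld: " + ", ".join(failed) + "]"
    return candidate

# Passing scores: the candidate is returned unchanged.
ok = guarded_response("Paris is the capital of France.",
                      {"toxicity": 0.02, "hallucination": 0.10})
# Failing scores: the candidate is replaced with a redaction notice.
blocked = guarded_response("some model output",
                           {"toxicity": 0.95, "hallucination": 0.10})
```

The key design point is that the policy check runs before the candidate output is ever shown to the user, which is why the latency of the underlying trust models matters so much for guardrail use cases.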

By maintaining this service as an internal component, Fiddler ensures consistent, reliable performance with optimized costs compared to solutions that rely on external LLM APIs for similar functionality.

Why Fiddler Trust Service Is Important

The Fiddler Trust Service addresses several critical challenges in LLM monitoring and governance. By providing specialized models optimized for evaluation tasks, it enables more efficient, cost-effective, and reliable monitoring than solutions dependent on general-purpose LLMs.

This service is essential for organizations that need to maintain real-time visibility into their LLM applications while ensuring outputs meet safety and quality standards. It enables faster detection of issues, more comprehensive monitoring coverage, and stronger protections against potentially harmful outputs.

As LLM deployments scale across the enterprise, the Trust Service's efficiency becomes increasingly valuable, reducing both operational costs and computational overhead compared to traditional evaluation approaches.

  • Performance Optimization: Fiddler Trust Models are specifically optimized for evaluation tasks, delivering quality assessments comparable to those of general-purpose LLMs, but with significantly lower latency and computational requirements.

  • Cost Efficiency: By using purpose-built models rather than larger general-purpose LLMs, the Trust Service reduces the computational resources required for comprehensive LLM monitoring, translating to lower operational costs.

  • Reliability: As a dedicated service maintained by Fiddler, the Trust Service provides more consistent availability and performance than solutions dependent on third-party API calls, which may have rate limits or service disruptions.

  • Comprehensive Coverage: The Trust Service supports both monitoring of production traffic (observability) and real-time, pre-response protection (guardrails), providing a unified approach to LLM governance throughout the application lifecycle.

  • Specialized Evaluation: Unlike general metrics, the Trust Service provides specialized assessments tailored specifically to LLM outputs, measuring dimensions like hallucination, alignment, toxicity, and quality that are unique to generative AI systems.

  • Scalability: As organizations deploy more LLM applications, the efficiency of the Trust Service enables monitoring at scale without proportional increases in computational overhead or costs.

  • Privacy and Security: By processing evaluations within Fiddler's infrastructure rather than sending data to third-party APIs, the Trust Service helps organizations maintain stronger data privacy and security controls.

Challenges

Effective LLM monitoring and protection present several technical and operational challenges that the Fiddler Trust Service is designed to address.

  • Evaluation Latency: Traditional approaches to LLM evaluation using other LLMs introduce significant latency, which the Trust Service addresses through specialized, efficient models optimized for evaluation tasks.

  • Computational Cost: Evaluating LLM outputs at scale using general-purpose models can be prohibitively expensive, a challenge mitigated by the Trust Service's more efficient purpose-built models.

  • Coverage vs. Performance: Organizations often face tradeoffs between comprehensive monitoring coverage and system performance, which the Trust Service helps balance through optimized evaluation approaches.

  • Evaluation Quality: Simpler metrics may fail to capture nuanced issues in LLM outputs, while the Trust Service provides sophisticated evaluations that maintain high correlation with human judgments.

  • Real-time Protection: Implementing guardrails without introducing significant latency is challenging, addressed by the Trust Service's efficient models and optimized processing pipeline.

  • Customization Needs: Different organizations have varying standards for acceptable content, requiring flexible evaluation systems that can be tailored to specific use cases and policies.

  • Integration Complexity: Adding monitoring to existing LLM deployments can be complex, a challenge the Trust Service addresses through streamlined integration options and APIs.

Frequently Asked Questions

Q: What advantages do Fiddler Trust Models offer over using general-purpose LLMs for evaluation?

Fiddler Trust Models provide quality assessments comparable to those of general-purpose LLMs, but with significantly lower latency (typically 10-100x faster), reduced computational requirements, lower costs, and more consistent availability, since they do not depend on third-party APIs that may impose rate limits or suffer service disruptions.

Q: Can I use the Fiddler Trust Service for both monitoring and real-time protection?

Yes, the Fiddler Trust Service powers both observability features (monitoring metrics) and real-time protection through Fiddler Guardrails. You can implement either or both capabilities depending on your specific needs.

Q: What types of metrics does the Trust Service provide?

The Trust Service generates specialized metrics for LLM outputs including safety evaluations (detecting harmful, unethical, or inappropriate content), faithfulness assessments (measuring hallucination and factual accuracy), and other quality dimensions like coherence, relevance, and alignment with intended use.

Q: How does the Trust Service integrate with my existing LLM applications?

For monitoring, you can publish LLM inputs and outputs to Fiddler's platform either through batch uploads or real-time API calls. For guardrails protection, you integrate the Guardrails API into your application flow, sending potential outputs for evaluation before displaying them to users.
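As a concrete illustration of the batch-publishing path, the sketch below assembles a minimal event payload for one LLM interaction. The field names (`prompt`, `response`, `timestamp`) and the single-record batch shape are illustrative assumptions; real payloads must match the columns declared in the model schema you onboarded to Fiddler.

```python
import json
from datetime import datetime, timezone

def build_event(prompt: str, response: str) -> dict:
    """Assemble one LLM interaction as a publishable event record.

    Field names here are illustrative; actual names must match the
    columns defined in the model schema onboarded to Fiddler.
    """
    return {
        "prompt": prompt,
        "response": response,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# A batch is simply a list of such records, serialized for upload
# via the Python client's publish call or the REST events endpoint.
batch = [build_event("What is data drift?", "Data drift is a change ...")]
payload = json.dumps(batch)
```

The same record shape works for both batch uploads and streaming publication; only the transport differs.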

Q: Is the Fiddler Trust Service available as a standalone offering?

The Fiddler Guardrails component of the Trust Service is available as a standalone offering, while the monitoring metrics are integrated into Fiddler's comprehensive observability platform.

Related Terms

  • Trust Score
  • Enrichments
  • Guardrails
  • Embedding Visualization
  • Data Drift

Related Resources

  • LLM Monitoring Overview
  • LLM-based Metrics Guide
  • Embedding Visualization with UMAP
  • Selecting Enrichments
  • Enrichments Documentation
  • Guardrails for Proactive Application Protection
  • Fiddler Fast Trust Metrics