
Publishing Inference Data

Publish Inference Events to Fiddler

After you onboard an ML model or LLM application as a Fiddler Model, you can publish inference events for analysis, performance monitoring, and reporting. There are two types of inference data:

  • Pre-production data: Static datasets such as training or testing data that serve as references for comparison

  • Production data: Time series data from live model inferences that Fiddler monitors against your baselines

Integration Methods

Fiddler offers two ways to publish inference data:

Python Client Library

Use the Fiddler Python client in Python environments. Publish both production and pre-production inference data with the Model.publish() method.

For more details, see the Python client documentation.

REST API

Use the REST API for language-agnostic integration from any platform. Both production and pre-production inference data use a common interface.

For more details, see the Events REST API Guide.

Publish Pre-Production Data

Publish pre-production data to Fiddler as a single dataset. You can add multiple baseline datasets to a model to create customized references for different metrics and alert rules.

Fiddler accepts pre-production data in these formats:

  • Pandas DataFrame

  • Parquet file

  • CSV file

Note:

Pre-production datasets are immutable after publication. You can't update them or delete individual rows.

Upload a Static Pre-Production Baseline

import fiddler as fdl

# Connect to your Fiddler instance (URL and token are placeholders)
fdl.init(url='https://your_company.fiddler.ai', token='your_api_token')

dataset_file_path = 'path_to_your_data.parquet'
dataset_name = 'a_unique_identifying_name'

project = fdl.Project.from_name(name='your_project_name')
model = fdl.Model.from_name(name='your_model_name', project_id=project.id)

job = model.publish(
    source=dataset_file_path,
    environment=fdl.EnvType.PRE_PRODUCTION,
    dataset_name=dataset_name,
)
# The publish() method is asynchronous. Use the publish job's wait() method
# if synchronous behavior is desired.
# job.wait()
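Parquet is shown above, but pandas DataFrames and CSV files work the same way: pass the DataFrame object or the file path as source. A brief sketch, reusing the model object and file path from the snippet above and assuming the baseline data fits in memory:

import pandas as pd

# Load the baseline data into memory and publish the DataFrame directly
baseline_df = pd.read_parquet(dataset_file_path)

job = model.publish(
    source=baseline_df,
    environment=fdl.EnvType.PRE_PRODUCTION,
    dataset_name='baseline_from_dataframe',
)
job.wait()  # optional: block until ingestion completes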

Publish Production Data

Fiddler provides several methods for publishing and editing production inference data:

  • Batch publishing: Send data in batches using pandas DataFrames, Parquet files, or CSV files

  • Stream publishing: Send individual events or small batches in near real-time

  • Update publishing: Modify previously published data

  • Delete publishing: Remove published data when needed

Fiddler accepts production data in these formats:

  • Pandas DataFrames

  • Parquet files

  • CSV files

  • List of Python dictionaries (limited to stream and updates)

A list of dictionaries is an additional format supported only for production data (streaming and updates), beyond the three formats shared with pre-production data.

Choose the method that best fits your use case by reviewing the publishing guides listed at the end of this page.
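As a brief sketch of the two most common paths, the snippet below publishes a batch from a pandas DataFrame and then streams a single event passed as a list containing one dictionary. It reuses the model object from the baseline example above; the file name and column names are placeholders for your own schema.

import pandas as pd

# Batch publishing: a file or DataFrame of recent inferences
production_df = pd.read_parquet('recent_inferences.parquet')
batch_job = model.publish(
    source=production_df,
    environment=fdl.EnvType.PRODUCTION,
)

# Stream publishing: a list of dictionaries, one per event
# (keys are placeholder column names from the model schema)
model.publish(
    source=[
        {
            'event_id': 'abc-123',
            'timestamp': '2024-11-01 10:00:00',
            'feature_1': 0.5,
            'output_score': 0.87,
        }
    ],
    environment=fdl.EnvType.PRODUCTION,
)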

Key Considerations

Here are some considerations to keep in mind as you onboard models and begin publishing production data to Fiddler.

Inference Event Unique Identifier

Fiddler requires a unique identifier on each published event if you may later need to update its ground truth labels or metadata columns.

  • Define the unique identifier column name when onboarding a model: Model.event_id_col

  • A unique index on the event id column is not enforced

  • Because duplicate values are allowed, all events sharing the same event ID value are included in metric calculations

Inference Event Timestamp

Fiddler requires a timestamp for each inference event; it is used as the event occurrence time in time-series monitoring charts and in alert rule evaluation.

  • Define the timestamp column name when onboarding a model: Model.event_ts_col

  • If not defined, Fiddler will use the time of publication as the event occurrence timestamp

  • Timestamps are stored and rendered in UTC

  • Timestamps with timezone are accepted but will be converted to UTC

  • Fiddler supports standard pandas-parseable timestamp formats, inferring the format from the data
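Both columns above are declared once, when the model is onboarded. Below is a minimal sketch, assuming the 3.x Python client's Model.from_data() constructor and placeholder feature and column names; see the Model Onboarding guides for the full workflow.

import fiddler as fdl
import pandas as pd

sample_df = pd.read_parquet('path_to_your_data.parquet')
project = fdl.Project.from_name(name='your_project_name')

# Declare which columns hold the unique event ID and the event timestamp
model = fdl.Model.from_data(
    source=sample_df,
    name='your_model_name',
    project_id=project.id,
    spec=fdl.ModelSpec(
        inputs=['feature_1', 'feature_2'],
        outputs=['output_score'],
        metadata=['event_id', 'timestamp'],
    ),
    event_id_col='event_id',    # unique ID used for later updates
    event_ts_col='timestamp',   # event occurrence time, stored in UTC
)
model.create()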

Data Retention Policy

Fiddler retains production inference event data for 90 days. Contact your Fiddler customer success representative if you need a different retention period.

Raw Event Data

  • Retained for 90 days from publication date

  • Automatically deleted after 90 days

  • Policy applies globally

Pre-Calculated Metrics

  • Standard metrics derived from raw data are retained indefinitely

  • Dashboards and charts continue to display historical trends after raw data expires

Runtime Features

  • Custom metrics require raw event data and aren't available for data older than 90 days

For detailed instructions on each publishing method, see these guides:

  • Creating a Baseline Dataset
  • Publishing Batches Of Events
  • Streaming Live Events
  • Updating Already Published Events
  • Deleting Events From Fiddler
  • Publishing Ranking Events

💡 Need help? Contact us at help@fiddler.ai.