See `client.publish_events_batch()` for detailed information on function usage.
Fiddler has a flexible ETL framework for retrieving and publishing batches of production data, either from local storage or from the cloud. This gives you flexibility in how you store your data when publishing events to Fiddler.
The following data formats are currently supported:
- pandas DataFrame objects
- CSV files
- Parquet files
- Avro files
- Pickled pandas DataFrame objects
- gzipped CSV files
The following data locations are supported:
- In memory (for DataFrames)
- Local disk
- AWS S3
- GCP Cloud Storage
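As a sketch of how the same batch can live in several of the supported formats, the snippet below builds a small hypothetical event batch with pandas and writes it out as CSV, a pickled DataFrame, and gzipped CSV. The column names are illustrative only, and the filenames are placeholders, not paths Fiddler requires.

```python
import pandas as pd

# A small hypothetical batch of production events; the column names
# are illustrative, not required by Fiddler.
events = pd.DataFrame({
    "feature_1": [0.2, 0.7, 0.5],
    "feature_2": [1, 0, 1],
    "prediction": [0.31, 0.88, 0.55],
})

# The same batch can be stored in any of the supported on-disk formats:
events.to_csv("my_batch.csv", index=False)     # CSV
events.to_pickle("my_batch.pkl")               # pickled DataFrame
events.to_csv("my_batch.csv.gz", index=False)  # gzipped CSV (compression
                                               # inferred from the extension)

# Any of these files, or the in-memory DataFrame itself, can then be
# passed as batch_source to client.publish_events_batch().
round_trip = pd.read_csv("my_batch.csv.gz")
```

Each artifact holds the identical batch, so the choice of format can follow whatever your existing data pipeline already produces.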
Once you have a batch of events stored somewhere, publishing it to Fiddler is a single call to the client's `publish_events_batch()` method:

```python
client.publish_events_batch(
    project_id=PROJECT_ID,
    model_id=MODEL_ID,
    batch_source="my_batch.csv",
)
```
After calling the function, please allow 3-5 minutes for events to populate the Monitor page.