Vector Monitoring
Detecting Drift in Multi-Dimensional ML and GenAI Model Data
Many modern machine learning systems use input features that cannot be represented as a single number, such as text or image data. These complex features are typically represented by high-dimensional vectors obtained through vectorization methods like text embeddings generated by NLP models. Fiddler users often need to monitor groups of univariate features together and detect data drift in multidimensional feature spaces.
To address these needs, Fiddler provides vector monitoring capabilities that enable you to define custom features and use advanced methods for monitoring data drift in multidimensional spaces.
You can define custom features by grouping columns together in baseline and inference data. For NLP or image data, you can define custom features using columns that contain embedding vectors.
Define Custom Features
Use the Fiddler client to define one or more custom features. You can specify custom features in three ways:
Group dataset columns that need to be monitored together as a vector (custom_feature_1, custom_feature_2)
Use existing embedding vectors with the source column (custom_feature_3, custom_feature_4)
Define an enrichment that instructs Fiddler to generate embedding vectors automatically on ingestion (custom_feature_5)
After you define and pass a list of custom features to Fiddler, Fiddler runs a clustering-based data drift detection algorithm for each custom feature. The system calculates a corresponding drift value between the baseline and published events for the selected time period.
from fiddler import CustomFeature, ImageEmbedding, TextEmbedding

# Group columns into vectors
custom_feature_1 = CustomFeature.from_columns(
    ['f1', 'f2', 'f3'], custom_name='vector1'
)
custom_feature_2 = CustomFeature.from_columns(
    ['f1', 'f2', 'f3'], n_clusters=5, custom_name='vector2'
)

# Use existing embeddings
custom_feature_3 = TextEmbedding(
    name='Document Text Embedding',
    column='text_embedding_col',
    source_column='text',
)
custom_feature_4 = ImageEmbedding(
    name='Image Embedding',
    column='image_embedding_col',
    source_column='image_url',
)

# Define an automated text embedding enrichment
# (give it a name distinct from custom_feature_3 above)
custom_feature_5 = TextEmbedding(
    name='Enriched Document Text Embedding',
    source_column='doc_col',
    column='Enrichment Unstructured Embedding',
    n_tags=10,
)
Passing the Custom Features List to ModelSpec
After you define custom features for vector monitoring, add them to the ModelSpec and onboard the model to Fiddler.
from fiddler import Model, ModelSpec, Project

model_spec = ModelSpec(
    inputs=[
        'creditscore',
        'geography',
        'gender',
        'age',
        'tenure',
        'balance',
        'numofproducts',
        'hascrcard',
        'isactivemember',
        'estimatedsalary',
        'doc_col',
    ],
    outputs=['predicted_churn'],
    targets=['churn'],
    # Note: Embedding columns you pass in must be included with the metadata columns.
    metadata=['customer_id', 'timestamp', 'text_embedding_col', 'image_embedding_col'],
    custom_features=[
        custom_feature_1,
        custom_feature_2,
        custom_feature_3,
        custom_feature_4,
        custom_feature_5,
    ],
)

model = Model.from_data(
    name='your_model_name',
    project_id=Project.from_name('your_project_name').id,
    source=sample_df,
    spec=model_spec,
    task=model_task,
    task_params=task_params,
    event_id_col=id_column,
    event_ts_col=timestamp_column,
)
model.create()
Understanding the Drift Detection Algorithm
Fiddler's vector monitoring uses a clustering-based approach to detect drift in multidimensional spaces:
Baseline clustering: The system analyzes your baseline data to identify natural clusters using k-means clustering
Production comparison: New production data is compared against these established clusters
Drift calculation: The system calculates drift scores based on changes in cluster distributions and centroid distances
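The three steps above can be sketched with off-the-shelf tools. The sketch below is illustrative only, not Fiddler's exact implementation: the `cluster_drift` helper is hypothetical, and the choice of Jensen-Shannon distance as the histogram comparison is an assumption for demonstration. It fits k-means on baseline vectors, assigns production vectors to the nearest baseline centroid, and scores drift by how much the cluster-frequency histogram shifts.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import KMeans

def cluster_drift(baseline, production, n_clusters=4, seed=0):
    """Illustrative cluster-based drift score (not Fiddler's exact algorithm)."""
    # Step 1: baseline clustering with k-means.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(baseline)
    # Step 2: assign production vectors to the established baseline clusters.
    prod_labels = km.predict(production)
    # Step 3: compare the two cluster-frequency histograms.
    base_hist = np.bincount(km.labels_, minlength=n_clusters) / len(baseline)
    prod_hist = np.bincount(prod_labels, minlength=n_clusters) / len(production)
    return jensenshannon(base_hist, prod_hist)

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, size=(500, 8))  # baseline embedding vectors
same = rng.normal(0, 1, size=(500, 8))      # production data, same distribution
shifted = rng.normal(2, 1, size=(500, 8))   # production data after a mean shift

print(cluster_drift(baseline, same))     # low score: distributions match
print(cluster_drift(baseline, shifted))  # higher score: drift detected
```

Because production vectors are scored against frozen baseline centroids, the same baseline clustering can be reused across time windows, and the drift score stays comparable from window to window.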
Performance considerations
Computational cost: Vector monitoring requires more computational resources than univariate monitoring
Memory usage: High-dimensional vectors and clustering algorithms increase memory requirements
Processing time: Drift calculations may take longer for large datasets or high-dimensional vectors
Best practices
Choose appropriate cluster numbers: Start with 3-8 clusters and adjust based on your data's natural groupings
Monitor cluster stability: Regularly review cluster formations to ensure they remain meaningful
Set reasonable thresholds: Establish drift thresholds that balance sensitivity with false positive rates
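For the first practice, one common way to pick a cluster count in the 3-8 range is to fit each candidate on your baseline vectors and keep the one with the best silhouette score. This sketch uses scikit-learn and is independent of the Fiddler client; `pick_n_clusters` is a hypothetical helper, not a Fiddler API.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_n_clusters(vectors, candidates=range(3, 9), seed=0):
    """Return the candidate k with the highest silhouette score on baseline vectors."""
    scores = {}
    for k in candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(vectors)
        scores[k] = silhouette_score(vectors, labels)
    return max(scores, key=scores.get)

# Toy baseline: three well-separated blobs in 8 dimensions.
rng = np.random.default_rng(0)
blobs = np.vstack([rng.normal(c, 0.3, size=(200, 8)) for c in (0.0, 5.0, 10.0)])
print(pick_n_clusters(blobs))  # → 3
```

The chosen k can then be passed as the `n_clusters` argument when defining a custom feature with `CustomFeature.from_columns`.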
Risk Considerations for AI/ML Applications
When implementing vector monitoring, consider these potential risks:
Bias amplification: Clustering algorithms may amplify existing biases in your training data
Concept drift detection: Traditional clustering may miss subtle concept drift that affects model performance
Interpretability challenges: High-dimensional clusters can be difficult to interpret and explain to stakeholders
Note: For a complete example of NLP monitoring, see our NLP monitoring quick start guide, which demonstrates embedding generation for unstructured inputs.
Related topics
📘 Quick Start for NLP Monitoring
Check out our Quick Start guide for NLP monitoring for a fully functional notebook example where we instruct Fiddler to generate the embeddings for unstructured inputs.
❓ Questions? Talk to a product expert or request a demo.
💡 Need help? Contact us at [email protected].