Enrichments

Introduction to Enrichments

Enrichments are specialized evaluation features that augment your LLM application data with automatically generated trust and safety metrics. By defining enrichments during model onboarding, you instruct Fiddler to analyze published prompts and responses using purpose-built models that assess dimensions like faithfulness, toxicity, relevance, and safety compliance.

The enrichment framework processes your application's inputs and outputs to generate quantitative scores that integrate directly with Fiddler's monitoring dashboards, alerting systems, and root cause analysis tools. This approach enables proactive detection of model drift, content safety violations, and performance degradation without requiring manual evaluation or external API dependencies.

  • Enrichments are custom features designed to augment the data provided in events

  • Enrichments augment existing columns with new metrics that are defined during model onboarding

  • The new metrics are available for use within the analysis, charting, and alerting functionalities in Fiddler

The following example demonstrates how to configure a TextEmbedding enrichment to enable vector-based monitoring and visualization of your LLM application's text inputs. TextEmbedding enrichments convert unstructured text into high-dimensional vector representations that capture semantic meaning, enabling Fiddler to detect drift in the topics and themes of your prompts or responses over time.

In this configuration, the enrichment transforms text from the question column into numerical embeddings stored in question_embedding. These embeddings power Fiddler's 3D UMAP visualizations, allowing you to visually identify clusters of similar content, detect outliers, and spot shifts in user behavior patterns. The TextEmbedding feature also enables drift detection by comparing the distribution of embeddings between your baseline and production data, providing early warning when your application encounters significantly different types of content than expected.

from fiddler import TextEmbedding, ModelSpec

# Define a TextEmbedding enrichment for the 'question' input column
fiddler_custom_features = [
    TextEmbedding(
        name='question_cf',          # Internal name for the custom feature
        source_column='question',    # Original text column to analyze
        column='question_embedding', # Generated embedding vector column
    ),
]

model_spec = ModelSpec(
    inputs=['question'],
    custom_features=fiddler_custom_features,
)

Custom LLM Classifier

The Custom LLM Classifier enrichment leverages Llama 3.1 8B to categorize input data based on a user-defined prompt and a specific set of categories. This provides flexibility for creating custom classification tasks tailored to specific needs, going beyond the pre-defined enrichment types.

It works by dynamically constructing a prompt from the provided template and input data, then instructing the LLM to determine which of the specified categories best fits the prompt's context.
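
A minimal sketch of that flow, assuming a hypothetical `build_prompt`/`classify` pair and a stubbed function standing in for the Llama 3.1 8B call (the template and category names here are illustrative, not Fiddler's internals):

```python
# Hypothetical sketch: fill a prompt template with the row's input text,
# then ask an LLM (stubbed below) to pick exactly one category.
PROMPT_TEMPLATE = (
    "Classify the following text into exactly one of these categories: "
    "{categories}.\n\nText: {text}\n\nCategory:"
)

def build_prompt(text: str, categories: list[str]) -> str:
    """Dynamically construct the classification prompt for one input row."""
    return PROMPT_TEMPLATE.format(categories=", ".join(categories), text=text)

def classify(text: str, categories: list[str], llm) -> str:
    """Send the constructed prompt to the LLM and keep only a valid category."""
    answer = llm(build_prompt(text, categories)).strip()
    return answer if answer in categories else "unknown"

# Stand-in for the real LLM call, for demonstration only.
fake_llm = lambda prompt: "billing"

print(classify("Why was my card charged twice?",
               ["billing", "shipping", "other"], fake_llm))  # billing
```

The guard in `classify` illustrates why constraining the model's answer to the configured category set matters: free-form LLM output may not match any category verbatim.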


Embedding

Embeddings are numerical representations (vectors) that a model generates for input text. Each number within the vector represents a different dimension of the text input, and the meaning of each number depends on how the embedding-generating model was trained.

Fiddler uses publicly available embeddings to power the 3D UMAP experience. Because the same model generates all embeddings, the points will naturally cluster, enabling quick visual anomaly detection.

To create embeddings and leverage them for the UMAP visualization, you must create a new TextEmbedding enrichment on your unstructured text column. If you want to bring your own embeddings onto the Fiddler platform, you can direct Fiddler to consume the embeddings vector directly from your data.

View Embedding usage examples →


Centroid Distance

Fiddler uses KMeans to determine cluster membership for a given enrichment. The Centroid Distance enrichment reports the distance between each point and its closest cluster centroid. Centroid Distance is automatically added whenever a TextEmbedding enrichment is created for a model.
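
Conceptually, the reported value is the minimum distance from a point to the learned KMeans centroids. A stripped-down sketch with hypothetical 2-D points (Fiddler's actual centroids live in the high-dimensional embedding space):

```python
import math

def centroid_distance(point, centroids):
    """Distance from `point` to its closest centroid (conceptual sketch;
    Fiddler derives the centroids from KMeans over the embeddings)."""
    return min(math.dist(point, c) for c in centroids)

centroids = [(0.0, 0.0), (10.0, 10.0)]
print(centroid_distance((1.0, 1.0), centroids))  # ≈ 1.414
```

A large centroid distance flags a point that sits far from every known cluster, which is why this value is useful for surfacing outliers.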

View Centroid Distance usage examples →


Personally Identifiable Information

The PII (Personally Identifiable Information) enrichment detects and flags sensitive information in textual data. Whether the text is user-entered or system-generated, this enrichment identifies instances where PII may be exposed, helping prevent privacy breaches and misuse of personal data. Mishandling or unintentionally leaking PII can have serious repercussions, including privacy violations, identity theft, and significant legal and reputational damage.

View PII Enrichment usage examples →


Evaluate

This enrichment provides n-gram-based metrics for comparing two passages of text, such as BLEU, ROUGE, and METEOR. Originally created to compare an AI-generated translation or summary against a human-generated reference, these metrics also have some use in RAG summarization tasks. They score highest when the reference and generated texts contain overlapping sequences, and they are less effective for long passages of text.
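
To illustrate why these metrics reward overlapping sequences, here is a simplified, BLEU-style clipped unigram precision. The real BLEU, ROUGE, and METEOR implementations add higher-order n-grams, recall, stemming, and other refinements, so treat this as a sketch of the core idea only:

```python
from collections import Counter

def unigram_precision(reference: str, candidate: str) -> float:
    """Clipped unigram precision: the fraction of candidate tokens that
    also appear in the reference (counts clipped to reference counts)."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum(min(count, ref[tok]) for tok, count in cand.items())
    return overlap / max(sum(cand.values()), 1)

print(unigram_precision("the cat sat on the mat", "the cat sat"))  # 1.0
print(unigram_precision("the cat sat on the mat", "a dog ran"))    # 0.0
```

The score drops toward zero as the generated text diverges from the reference, which is exactly the overlap behavior described above.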

View Evaluate usage examples →


Textstat

The Textstat enrichment generates various text statistics, such as character/letter count, the Flesch-Kincaid readability score, and other metrics, on the target text column.
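
A rough sketch of what a few of these statistics look like. The production enrichment relies on established implementations (e.g. for Flesch-Kincaid), so the simple counts below are illustrative only:

```python
import re

def text_stats(text: str) -> dict:
    """Minimal Textstat-style counts (illustrative, not the real library)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "char_count": len(text),
        "letter_count": sum(c.isalpha() for c in text),
        "word_count": len(words),
        "sentence_count": len(sentences),
    }

print(text_stats("Hello world. How are you?"))
```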

View Textstat usage examples →


Sentiment

The Sentiment enrichment uses NLTK's VADER lexicon to generate a score and corresponding sentiment for all specified columns. To enable, set the enrichment parameter to sentiment.
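
Under the hood, VADER-style scoring is lexicon-driven: each word carries a valence weight, and the weights are aggregated into a score. A toy sketch with made-up lexicon values (the real VADER lexicon and its heuristics for negation, intensifiers, and punctuation are far richer):

```python
# Illustrative valence weights only; NLTK's VADER lexicon has thousands
# of rated entries plus rule-based adjustments.
LEXICON = {"good": 1.9, "great": 3.1, "bad": -2.5, "terrible": -2.1}

def sentiment_score(text: str) -> float:
    """Average the valence of known words; positive means positive sentiment."""
    words = text.lower().split()
    return sum(LEXICON.get(w, 0.0) for w in words) / max(len(words), 1)

print(sentiment_score("the movie was great"))   # positive score
print(sentiment_score("this is terrible"))      # negative score
```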

View Sentiment usage examples →


Profanity

The Profanity enrichment is designed to detect and flag the use of offensive or inappropriate language within textual content. This enrichment is essential for maintaining the integrity and professionalism of digital platforms, forums, social media, and any user-generated content areas.

The Profanity enrichment searches the target text for words drawn from two profanity word-list sources.

View Profanity usage examples →


Toxicity

The Toxicity enrichment classifies whether a piece of text is toxic. A RoBERTa-based model is fine-tuned on a mix of toxic and non-toxic data. The model predicts a score between 0 and 1, where scores closer to 1 indicate toxicity.


Regex Match

The Regex Match enrichment evaluates text responses or content for adherence to specific patterns defined by regular expressions (regex). By accepting a regex as input, this metric offers a highly customizable way to check if a string column in the dataset matches the given pattern. This functionality is essential for scenarios requiring precise formatting, specific keyword inclusion, or adherence to particular linguistic structures.
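
For example, a format check can be expressed as a single pattern. This sketch uses a US SSN-style pattern purely for illustration; the pattern you supply to the enrichment is entirely up to you:

```python
import re

# Illustrative pattern: three digits, two digits, four digits, dash-separated.
pattern = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def regex_match(value: str) -> bool:
    """Return True when the whole string matches the configured pattern."""
    return bool(pattern.match(value))

print(regex_match("123-45-6789"))  # True
print(regex_match("hello"))        # False
```

Anchoring the pattern with `^` and `$` makes it a whole-string check rather than a substring search, which is usually what format validation requires.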

View Regex Match usage examples →


Topic

The Topic enrichment leverages zero-shot classifier models to categorize textual inputs into a predefined list of topics, even without having been explicitly trained on those topics. This approach to text classification is known as zero-shot learning, a method in natural language processing (NLP) that enables models to intelligently classify text they haven't encountered during training. It is beneficial for applications that require understanding and organizing content dynamically across a broad range of subjects or themes.

View Topic usage examples →


Banned Keyword Detector

The Banned Keyword Detector enrichment is designed to scrutinize textual inputs for the presence of specified terms, with a particular focus on identifying content that includes potentially undesirable or restricted keywords. This enrichment operates based on a list of terms defined in its configuration, making it highly adaptable to various content moderation, compliance, and content filtering needs.
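
Because the detector is driven by a configured term list, its core behavior can be sketched as a case-insensitive search over that list (the terms below are illustrative, not a default configuration):

```python
# Hypothetical configured term list; multi-word phrases are matched
# as substrings after lowercasing.
banned_terms = {"confidential", "internal only"}

def contains_banned_term(text: str) -> bool:
    """Flag text that contains any term from the configured list."""
    lowered = text.lower()
    return any(term in lowered for term in banned_terms)

print(contains_banned_term("This document is CONFIDENTIAL."))  # True
print(contains_banned_term("Nothing to see here."))            # False
```

Swapping in a different term list adapts the same check to other moderation or compliance policies, which is the adaptability the enrichment's configuration provides.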

View Banned Keyword Detector usage examples →


Language Detector

The Language Detector enrichment identifies the language of the source text. This enrichment is based on a pretrained language identification model.

View Language Detector usage examples →


Answer Relevance

The Answer Relevance enrichment evaluates the pertinence of AI-generated responses to their corresponding prompts. This enrichment assesses whether a response accurately addresses the question or topic posed by the initial prompt, providing a simple yet effective binary outcome: relevant or not. Its primary function is to ensure that the output of AI systems, such as chatbots, virtual assistants, and content generation models, remains aligned with the user's informational needs and intentions.

View Answer Relevance usage examples →


Faithfulness

The Faithfulness (Groundedness) enrichment is a binary indicator that evaluates the accuracy and reliability of the facts presented in AI-generated text responses. It specifically assesses whether the information used in the response aligns with and is grounded in the provided context, often through referenced documents or data. This enrichment plays a critical role in ensuring that the AI's outputs are not only relevant but also factually accurate, given the context it was provided.

View Faithfulness usage examples →


Coherence

The Coherence enrichment assesses the logical flow and clarity of AI-generated text responses, ensuring they are structured in a way that makes sense from start to finish. This enrichment is crucial for evaluating whether the content produced by AI maintains a consistent theme, argument, or narrative, without disjointed thoughts or abrupt shifts in topic. Coherence is key to making AI-generated content not only understandable but also engaging and informative for the reader.

View Coherence usage examples →


Conciseness

The Conciseness enrichment evaluates the brevity and clarity of AI-generated text responses, ensuring that the information is presented in a straightforward and efficient manner. This enrichment identifies and rewards responses that effectively communicate their message without unnecessary elaboration or redundancy. In the realm of AI-generated content, where verbosity can dilute the message's impact or confuse the audience, maintaining conciseness is crucial for enhancing readability and user engagement.

View Conciseness usage examples →


Fast Safety

The Fast Safety enrichment evaluates the safety of the text along eleven different dimensions: illegal, hateful, harassing, racist, sexist, violent, sexual, harmful, unethical, jailbreaking, and roleplaying. Fast Safety scores are generated through the Fast Trust Models.

View Fast Safety usage examples →


Fast Faithfulness

The Fast faithfulness enrichment is designed to evaluate the accuracy and reliability of facts presented in AI-generated text responses. Fast faithfulness is generated through the Fast Trust Models.

View Fast Faithfulness usage examples →


Token Count

The Token Count enrichment counts the number of tokens in a string.
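
As a naive illustration, whitespace splitting approximates a token count. Real token counts depend on the model's tokenizer (e.g. BPE subwords), so this stand-in will usually undercount relative to an LLM tokenizer:

```python
# Rough stand-in only: production tokenizers split into subword units,
# so their counts differ from simple word counts.
def token_count(text: str) -> int:
    return len(text.split())

print(token_count("How many tokens is this sentence?"))  # 6
```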

View Token Count usage examples →


SQL Validation

The SQL Validation enrichment evaluates SQL queries, across different dialects, for syntax correctness.
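
As an illustration of dialect-specific checking, this sketch asks SQLite's engine to plan a query; other dialects would need their own parsers, and this is not how the enrichment is implemented internally:

```python
import sqlite3

def is_valid_sqlite(query: str) -> bool:
    """Check a query against the SQLite dialect by asking the engine to
    plan it. Note: this also rejects queries referencing missing tables,
    so it is stricter than a pure syntax check."""
    try:
        sqlite3.connect(":memory:").execute(f"EXPLAIN {query}")
        return True
    except sqlite3.Error:
        return False

print(is_valid_sqlite("SELECT 1"))       # True
print(is_valid_sqlite("SELEC 1 FRM t"))  # False
```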

View SQL Validation usage examples →


JSON Validation

The JSON Validation enrichment validates JSON for syntactic correctness and, optionally, against a user-defined schema.
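
A lightweight sketch using only the standard library: parse the text, then optionally check for required top-level keys as a simple stand-in for full schema validation (the enrichment's actual schema support is richer than this):

```python
import json

def validate_json(text: str, required_keys=()) -> bool:
    """Return True when `text` parses as JSON and, if `required_keys` is
    given, is an object containing all of those top-level keys."""
    try:
        obj = json.loads(text)
    except json.JSONDecodeError:
        return False
    if required_keys:
        return isinstance(obj, dict) and all(k in obj for k in required_keys)
    return True

print(validate_json('{"name": "Ada"}', required_keys=("name",)))  # True
print(validate_json('{oops}'))                                    # False
```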

View JSON Validation usage examples →
