Enrichments

Introduction to Enrichments

Enrichments are specialized evaluation features that augment your LLM application data with automatically generated trust and safety metrics. By defining enrichments during model onboarding, you instruct Fiddler to analyze published prompts and responses using purpose-built models that assess dimensions like faithfulness, toxicity, relevance, and safety compliance.

The enrichment framework processes your application's inputs and outputs to generate quantitative scores that integrate directly with Fiddler's monitoring dashboards, alerting systems, and root cause analysis tools. This approach enables proactive detection of model drift, content safety violations, and performance degradation without requiring manual evaluation or external API dependencies.

  • Enrichments are custom features designed to augment the data provided in events

  • Enrichments augment existing columns with new metrics that are defined during model onboarding

  • The new metrics are available for use within the analysis, charting, and alerting functionalities in Fiddler

The following example demonstrates how to configure a TextEmbedding enrichment to enable vector-based monitoring and visualization of your LLM application's text inputs. TextEmbedding enrichments convert unstructured text into high-dimensional vector representations that capture semantic meaning, enabling Fiddler to detect drift in the topics and themes of your prompts or responses over time.

In this configuration, the enrichment transforms text from the question column into numerical embeddings stored in question_embedding. These embeddings power Fiddler's 3D UMAP visualizations, allowing you to visually identify clusters of similar content, detect outliers, and spot shifts in user behavior patterns. The TextEmbedding feature also enables drift detection by comparing the distribution of embeddings between your baseline and production data, providing early warning when your application encounters significantly different types of content than expected.

import fiddler as fdl

# Define a TextEmbedding enrichment for vector-based monitoring
fiddler_custom_features = [
    fdl.TextEmbedding(
        name='question_cf',          # Internal name for the custom feature
        source_column='question',    # Original text column to analyze
        column='question_embedding', # Generated embedding vector column
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['question'],
    custom_features=fiddler_custom_features,
)

Custom LLM Classifier

The Custom LLM Classifier enrichment leverages Llama 3.1 8B to categorize input data based on a user-defined prompt and a specific set of categories. This provides the flexibility to create custom classification tasks tailored to specific needs, going beyond the pre-defined enrichment types.

It works by dynamically constructing a prompt from the provided template and input data, then instructing the LLM to determine which of the specified categories best fits the prompt's context.
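
No configuration example is shown on this page; purely as an illustration, a hypothetical setup, assuming the enrichment follows the same fdl.Enrichment pattern as the enrichments below and assuming a key of 'custom_llm_classifier' with 'prompt_template' and 'categories' config fields, might look like this:

import fiddler as fdl

# Hypothetical sketch: the enrichment key and config field names below are
# assumptions for illustration, not confirmed by this page.
fiddler_custom_features = [
    fdl.Enrichment(
        name='Ticket Category',
        enrichment='custom_llm_classifier',  # assumed key
        columns=['prompt'],
        config={
            'prompt_template': 'Classify this support request: {prompt}',  # assumed field
            'categories': ['billing', 'technical', 'account'],  # assumed field
        },
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt'],
    custom_features=fiddler_custom_features,
)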


Embedding

Embeddings are numerical representations (vectors) generated by a model for input text. Each number within the vector represents a different dimension of the text input; the meaning of each dimension depends on how the embedding-generating model was trained.

Fiddler uses publicly available embedding models to power the 3D UMAP experience. Because the same model generates all embeddings, similar points naturally cluster, enabling quick visual anomaly detection.

To create embeddings and leverage them for the UMAP visualization, you must create a new TextEmbedding enrichment on your unstructured text column. If you want to bring your own embeddings onto the Fiddler platform, you can direct Fiddler to consume the embeddings vector directly from your data.


Example 1: Fiddler-Generated Embeddings

This example automatically generates text embeddings on a text column called prompt:

import fiddler as fdl

fiddler_custom_features = [
    fdl.TextEmbedding(
        name='Prompt TextEmbedding',  # name of generated column (for internal use, required)
        source_column='prompt',  # source - raw text
        column='Enrichment Prompt Embedding',  # name of the vector output
        n_clusters=5,  # Number of clusters for k-means clustering
        n_tags=5,  # Top n tags used as summary tags per cluster
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
Enrichment Prompt Embedding | vector | Embeddings corresponding to the string column prompt
Prompt TextEmbedding | integer | Column for internal use
Prompt TextEmbedding - Centroid Distance | float | Centroid distance for the string column prompt

Example 2: User-Provided Embeddings

This example demonstrates leveraging an existing vector column of text embeddings called pre_existing_prompt_embedding:

Vector embeddings must use data type List(float) for compatibility. CSV source data often requires preprocessing after the initial load into a pandas DataFrame, because vector columns are read in as their string representations:

import ast
import numpy as np
import pandas as pd
import fiddler as fdl

df = pd.read_csv(PATH_TO_SAMPLE_CSV)
# Convert string representation to list
df['pre_existing_prompt_embedding'] = df['pre_existing_prompt_embedding'].apply(
    ast.literal_eval
)

fiddler_custom_features = [
    fdl.TextEmbedding(
        name='User-provided Text Embedding',  # name of the generated column
        source_column='prompt',  # source - raw text
        column='pre_existing_prompt_embedding',  # name of your vector column
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt'],
    metadata=['pre_existing_prompt_embedding'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
User-provided Text Embedding | integer | Column for internal use
User-provided Text Embedding - Centroid Distance | float | Centroid distance for the string column prompt


Centroid Distance

Fiddler uses KMeans clustering to determine cluster membership for a given enrichment. The Centroid Distance enrichment reports the distance between a given point and the closest cluster centroid. Centroid Distance columns are generated automatically whenever a TextEmbedding enrichment is created for a model; see the Embedding section above for complete usage examples.


Personally Identifiable Information

The PII (Personally Identifiable Information) enrichment detects and flags sensitive information in textual data. Whether user-entered or system-generated, this enrichment identifies instances where PII may be exposed, helping prevent privacy breaches and misuse of personal data. Mishandling or unintentionally leaking PII can have serious repercussions, including privacy violations, identity theft, and significant legal and reputational damage.

PII enrichment is integrated with Presidio for entity detection.
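
Fiddler runs the detection server-side, but to preview what Presidio flags on your own data, a minimal local sketch using the presidio-analyzer package might look like this:

from presidio_analyzer import AnalyzerEngine

# Analyze a sample string for PII entities; each result carries the
# entity type, character span, and a confidence score.
analyzer = AnalyzerEngine()
results = analyzer.analyze(
    text='Contact Douglas MacArthur at d.mac@example.com',
    language='en',
)
for result in results:
    print(result.entity_type, result.start, result.end, result.score)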


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Rag PII',
        enrichment='pii',
        columns=['question'],  # one or more columns
        allow_list=['fiddler'],  # Optional: strings exempted from PII detection
        score_threshold=0.85,  # Optional: minimum confidence required to flag a match
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['question'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Rag PII (question) | bool | Whether any PII was detected
FDL Rag PII (question) Matches | str | The raw text matches flagged as potential PII (e.g., 'Douglas MacArthur,Korean')
FDL Rag PII (question) Entities | str | The entity types assigned to those matches (e.g., 'PERSON')

Supported PII Entity Types:

CREDIT_CARD, CRYPTO, DATE_TIME, EMAIL_ADDRESS, IBAN_CODE, IP_ADDRESS, LOCATION, PERSON, PHONE_NUMBER, URL, US_SSN, US_DRIVER_LICENSE, US_ITIN, US_PASSPORT


Evaluate

This enrichment provides n-gram-based metrics such as BLEU, ROUGE, and METEOR for comparing two passages of text. Originally created to compare AI-generated translations or summaries to human-written references, these metrics also have some use in RAG summarization tasks. They score highest when the reference and generated texts contain overlapping word sequences, and they are less effective on long passages of text.
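
For intuition on how n-gram overlap drives these scores, here is a minimal local BLEU computation using NLTK (an illustrative stand-in, not Fiddler's internal implementation):

from nltk.translate.bleu_score import sentence_bleu

reference = 'the cat sat on the mat'.split()
candidate = 'the cat is on the mat'.split()

# BLEU rewards word n-grams shared between the candidate and the
# reference; more overlapping sequences yield a score closer to 1.
print(sentence_bleu([reference], candidate))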


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='QA Evaluate',
        enrichment='evaluate',
        columns=['correct_answer', 'generated_answer'],
        config={
            'reference_col': 'correct_answer',  # required
            'prediction_col': 'generated_answer',  # required
            'metrics': ['bleu', 'rouge', 'meteor'],  # optional, this is the default
        }
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['question', 'correct_answer', 'generated_answer'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL QA Evaluate (bleu) | float | BLEU score: measures precision of word n-grams
FDL QA Evaluate (rouge1) | float | ROUGE-1 score: unigram recall
FDL QA Evaluate (rouge2) | float | ROUGE-2 score: bigram recall
FDL QA Evaluate (rougel) | float | ROUGE-L score: longest common subsequence
FDL QA Evaluate (rougelsum) | float | ROUGE-L summary score
FDL QA Evaluate (meteor) | float | METEOR score: precision, recall, and semantic matching


Textstat

The Textstat enrichment generates various text statistics such as character/letter count, Flesch-Kincaid, and other metrics on the target text column.
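
The statistic names match those of the open-source textstat package, so you can preview values locally (assuming that correspondence); a minimal sketch:

import textstat

text = 'Monitoring language model applications requires robust text metrics.'

# Compute the same two statistics requested in the configuration below.
print(textstat.char_count(text))
print(textstat.dale_chall_readability_score(text))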


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Text Statistics',
        enrichment='textstat',
        columns=['question'],
        config={
            'statistics': [
                'char_count',
                'dale_chall_readability_score',
            ]
        },
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['question'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Text Statistics (question) char_count | int | Character count of the string in the question column
FDL Text Statistics (question) dale_chall_readability_score | float | Readability score of the string in the question column

Supported Statistics:

char_count, letter_count, miniword_count, words_per_sentence, polysyllabcount, lexicon_count, syllable_count, sentence_count, flesch_reading_ease, smog_index, flesch_kincaid_grade, coleman_liau_index, automated_readability_index, dale_chall_readability_score, difficult_words, linsear_write_formula, gunning_fog, long_word_count, monosyllabcount


Sentiment

The Sentiment enrichment uses NLTK's VADER lexicon to generate a score and corresponding sentiment for all specified columns. To enable, set the enrichment parameter to sentiment.
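
For intuition, a local VADER computation with NLTK looks like this (the positive/negative/neutral cutoffs shown are VADER's common convention; Fiddler's exact mapping is not specified here):

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time lexicon download

scores = SentimentIntensityAnalyzer().polarity_scores('What a wonderful, helpful answer!')
compound = scores['compound']  # normalized overall score in [-1, 1]

# Common VADER convention: >= 0.05 positive, <= -0.05 negative, else neutral.
if compound >= 0.05:
    print('positive', compound)
elif compound <= -0.05:
    print('negative', compound)
else:
    print('neutral', compound)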


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Question Sentiment',
        enrichment='sentiment',
        columns=['question'],
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['question'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Question Sentiment (question) compound | float | Raw score of sentiment
FDL Question Sentiment (question) sentiment | string | One of positive, negative, or neutral


Profanity

The Profanity enrichment is designed to detect and flag the use of offensive or inappropriate language within textual content. This enrichment is essential for maintaining the integrity and professionalism of digital platforms, forums, social media, and any user-generated content areas.

The profanity enrichment searches the target text for words drawn from two profanity word lists.


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Profanity',
        enrichment='profanity',
        columns=['prompt', 'response'],
        config={'output_column_name': 'contains_profanity'},
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt', 'response'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Profanity (prompt) contains_profanity | bool | Whether the value of the prompt column contains profanity
FDL Profanity (response) contains_profanity | bool | Whether the value of the response column contains profanity


Toxicity

The Toxicity enrichment classifies whether a piece of text is toxic. A RoBERTa-based model, fine-tuned on a mix of toxic and non-toxic data, predicts a score between 0 and 1, where scores closer to 1 indicate toxicity.
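
No configuration example is shown on this page; assuming the toxicity enrichment follows the same fdl.Enrichment pattern as the others (with an assumed key of 'toxicity'), a sketch might look like this:

import fiddler as fdl

# Sketch only: the enrichment key 'toxicity' is assumed here to follow
# the naming pattern of the other enrichments on this page.
fiddler_custom_features = [
    fdl.Enrichment(
        name='Toxicity',
        enrichment='toxicity',
        columns=['prompt', 'response'],
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt', 'response'],
    custom_features=fiddler_custom_features,
)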


Regex Match

The Regex Match enrichment evaluates text responses or content for adherence to specific patterns defined by regular expressions (regex). By accepting a regex as input, this metric offers a highly customizable way to check if a string column in the dataset matches the given pattern. This functionality is essential for scenarios requiring precise formatting, specific keyword inclusion, or adherence to particular linguistic structures.


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Regex - only digits',
        enrichment='regex_match',
        columns=['prompt', 'response'],
        config={
            'regex': r'^\d+$',  # raw string avoids invalid escape sequences
        }
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt', 'doc_0', 'doc_1', 'doc_2', 'response'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Regex - only digits | category | Match or No Match, depending on whether the regex specified in the config matches the string


Topic

The Topic enrichment leverages zero-shot classifier models to categorize textual inputs into a predefined list of topics, even without having been explicitly trained on those topics. This approach is known as zero-shot learning, a method in natural language processing (NLP) that enables models to intelligently classify text they haven't encountered during training. It's beneficial for applications that require understanding and organizing content dynamically across a broad range of subjects or themes.
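
To see what zero-shot classification produces locally, here is an illustrative sketch with the Hugging Face transformers pipeline (Fiddler's underlying model is not specified on this page; facebook/bart-large-mnli is a common public choice):

from transformers import pipeline

# multi_label=True scores each topic independently, matching the
# enrichment's behavior where scores do not sum to 1.
classifier = pipeline('zero-shot-classification', model='facebook/bart-large-mnli')
result = classifier(
    'The central bank raised interest rates to curb inflation.',
    candidate_labels=['politics', 'economy', 'astronomy'],
    multi_label=True,
)
print(result['labels'], result['scores'])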


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Topics',
        enrichment='topic_model',
        columns=['response'],
        config={'topics': ['politics', 'economy', 'astronomy']},
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt', 'doc_0', 'doc_1', 'doc_2', 'response'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Topics (response) topic_model_scores | list[float] | Probability of the given input belonging to each of the topics specified in the enrichment config, in the same order as the topics list. Each value is between 0 and 1. The values do not sum to 1, because each classification is performed independently of the other topics.
FDL Topics (response) max_score_topic | string | Topic with the maximum score from the list of topic names specified in the enrichment config


Banned Keyword Detector

The Banned Keyword Detector enrichment is designed to scrutinize textual inputs for the presence of specified terms, with a particular focus on identifying content that includes potentially undesirable or restricted keywords. This enrichment operates based on a list of terms defined in its configuration, making it highly adaptable to various content moderation, compliance, and content filtering needs.


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Banned KW',
        enrichment='banned_keywords',
        columns=['prompt', 'response'],
        config={
            'output_column_name': 'contains_banned_kw',
            'banned_keywords': ['nike', 'adidas', 'puma'],
        },
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt', 'doc_0', 'doc_1', 'doc_2', 'response'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Banned KW (prompt) contains_banned_kw | bool | Whether the value of the prompt column contains one of the specified banned keywords
FDL Banned KW (response) contains_banned_kw | bool | Whether the value of the response column contains one of the specified banned keywords


Language Detector

The Language Detector enrichment identifies the language of the source text. It leverages pretrained fastText models for language identification.


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Language',
        enrichment='language_detection',
        columns=['prompt'],
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Language (prompt) language | string | Language prediction for the input text
FDL Language (prompt) language_probability | float | Confidence probability of the language prediction


Answer Relevance

The Answer Relevance enrichment evaluates the pertinence of AI-generated responses to their corresponding prompts. This enrichment assesses whether a response accurately addresses the question or topic posed by the initial prompt, providing a simple yet effective binary outcome: relevant or not. Its primary function is to ensure that the output of AI systems, such as chatbots, virtual assistants, and content generation models, remains aligned with the user's informational needs and intentions.


Python Configuration:

import fiddler as fdl

answer_relevance_config = {
    'prompt': 'prompt_col',
    'response': 'response_col',
}

fiddler_custom_features = [
    fdl.Enrichment(
        name='Answer Relevance',
        enrichment='answer_relevance',
        columns=['prompt_col', 'response_col'],
        config=answer_relevance_config,
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt_col', 'response_col'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Answer Relevance | bool | Binary metric, which is True if the response is relevant to the prompt


Faithfulness

The Faithfulness (Groundedness) enrichment is a binary indicator that evaluates the accuracy and reliability of the facts presented in AI-generated text responses. It specifically assesses whether the information used in the response aligns with and is grounded in the provided context, often through referenced documents or data. This enrichment plays a critical role in ensuring that the AI's outputs are not only relevant but also factually accurate, given the context it was provided.


Python Configuration:

import fiddler as fdl

faithfulness_config = {
    'context': ['doc_0', 'doc_1', 'doc_2'],
    'response': 'response_col',
}

fiddler_custom_features = [
    fdl.Enrichment(
        name='Faithfulness',
        enrichment='faithfulness',
        columns=['doc_0', 'doc_1', 'doc_2', 'response_col'],
        config=faithfulness_config,
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['doc_0', 'doc_1', 'doc_2', 'response_col'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Faithfulness | bool | Binary metric, which is True if the facts used in the response are correctly drawn from the context columns


Coherence

The Coherence enrichment assesses the logical flow and clarity of AI-generated text responses, ensuring they are structured in a way that makes sense from start to finish. This enrichment is crucial for evaluating whether the content produced by AI maintains a consistent theme, argument, or narrative, without disjointed thoughts or abrupt shifts in topic. Coherence is key to making AI-generated content not only understandable but also engaging and informative for the reader.


Python Configuration:

import fiddler as fdl

coherence_config = {
    'response': 'response_col',
}

fiddler_custom_features = [
    fdl.Enrichment(
        name='Coherence',
        enrichment='coherence',
        columns=['response_col'],
        config=coherence_config,
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['doc_0', 'doc_1', 'doc_2', 'response_col'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Coherence | bool | Binary metric, which is True if the response makes coherent arguments that flow well


Conciseness

The Conciseness enrichment evaluates the brevity and clarity of AI-generated text responses, ensuring that the information is presented in a straightforward and efficient manner. This enrichment identifies and rewards responses that effectively communicate their message without unnecessary elaboration or redundancy. In the realm of AI-generated content, where verbosity can dilute the message's impact or confuse the audience, maintaining conciseness is crucial for enhancing readability and user engagement.


Python Configuration:

import fiddler as fdl

conciseness_config = {
    'response': 'response',  # must reference a column present in the model spec
}

fiddler_custom_features = [
    fdl.Enrichment(
        name='Conciseness',
        enrichment='conciseness',
        columns=['response'],
        config=conciseness_config,
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt', 'doc_0', 'doc_1', 'doc_2', 'response'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Conciseness | bool | Binary metric, which is True if the response is concise and not overly verbose


Fast Safety

The Fast Safety enrichment evaluates the safety of text along eleven dimensions: illegal, hateful, harassing, racist, sexist, violent, sexual, harmful, unethical, jailbreaking, and roleplaying. Fast Safety scores are generated by the Fast Trust Models.


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Prompt Safety',
        enrichment='ftl_prompt_safety',
        columns=['prompt'],
        config={'classifiers': ['jailbreaking', 'illegal']}  # Optional: specify dimensions
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['prompt'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

For each dimension specified (or all 11 dimensions if not specified):

Column | Type | Description
FDL Prompt Safety (prompt) <dimension> | bool | Binary metric, which is True if the input is deemed unsafe along that dimension, False otherwise
FDL Prompt Safety (prompt) <dimension> score | float | Confidence probability of the safety prediction

Supported Dimensions: illegal, hateful, harassing, racist, sexist, violent, sexual, harmful, unethical, jailbreaking, roleplaying


Fast Faithfulness

The Fast Faithfulness enrichment is designed to evaluate the accuracy and reliability of facts presented in AI-generated text responses. Fast Faithfulness scores are generated by the Fast Trust Models.

The faithfulness threshold defaults to 0.5 but can be adjusted in the configuration to control the sensitivity of the faithfulness scoring. Lower thresholds result in stricter faithfulness detection, while higher thresholds are more permissive.


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='Faithfulness',
        enrichment='ftl_response_faithfulness',
        columns=['context', 'response'],
        config={
            'context_field': 'context',
            'response_field': 'response',
            'threshold': 0.5,  # Optional parameter; default is 0.5
        }
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['context', 'response'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Faithfulness faithful | bool | Binary metric, which is True if the facts used in the response are correctly drawn from the context columns
FDL Faithfulness faithful score | float | Confidence probability of the faithfulness prediction


Token Count

The Token Count enrichment counts the number of tokens in a string.

This enrichment uses the tiktoken library for token counting.
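
For a local preview of the count, a minimal tiktoken sketch (the specific encoding Fiddler uses is not documented here; cl100k_base is shown for illustration):

import tiktoken

# Encode a string and count the resulting tokens.
encoding = tiktoken.get_encoding('cl100k_base')  # illustrative encoding choice
print(len(encoding.encode('How many tokens does this sentence contain?')))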


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='TokenCount',
        enrichment='token_count',
        columns=['question'],
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['question'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
FDL Token Counts (question) | int | Number of tokens in the string


SQL Validation

The SQL Validation enrichment is designed to evaluate different query dialects for syntax correctness.

Query validation is syntax-based and does not check queries against any existing schema or database for validity.
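
The supported dialect list matches that of the sqlglot library, so a local approximation of a syntax-only check (an assumption about the underlying implementation, which this page does not confirm) might look like this:

import sqlglot
from sqlglot.errors import ParseError

def is_valid_sql(query: str, dialect: str = 'mysql') -> bool:
    # Parse only: no schema or database is consulted, mirroring the
    # syntax-only behavior described above.
    try:
        sqlglot.parse_one(query, read=dialect)
        return True
    except ParseError:
        return False

print(is_valid_sql('SELECT id FROM users'))  # True
print(is_valid_sql('SELECT 1 +'))            # False: incomplete expression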


Python Configuration:

import fiddler as fdl

# The following dialects are supported:
# 'athena', 'bigquery', 'clickhouse', 'databricks', 'doris', 'drill', 'duckdb',
# 'hive', 'materialize', 'mysql', 'oracle', 'postgres', 'presto', 'prql',
# 'redshift', 'risingwave', 'snowflake', 'spark', 'spark2', 'sqlite',
# 'starrocks', 'tableau', 'teradata', 'trino', 'tsql'

fiddler_custom_features = [
    fdl.Enrichment(
        name='SQLValidation',
        enrichment='sql_validation',
        columns=['query_string'],
        config={
            'dialect': 'mysql'
        }
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['query_string'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
SQL Validator valid | bool | True if the query string is syntactically valid for the specified dialect, False if not
SQL Validator errors | str | If syntax errors are found, they are returned as a JSON-serialized string containing a list of dictionaries describing the errors


JSON Validation

The JSON Validation enrichment validates that a string is syntactically correct JSON and, optionally, that it conforms to a user-defined schema.

This enrichment uses the python-jsonschema library for JSON schema validation. The defined validation_schema must be a valid python-jsonschema schema.
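
You can exercise a candidate validation_schema locally with python-jsonschema before onboarding; a minimal sketch:

import json
from jsonschema import ValidationError, validate

schema = {
    '$schema': 'https://json-schema.org/draft/2020-12/schema',
    'type': 'object',
    'properties': {'prop_1': {'type': 'number'}},
    'required': ['prop_1'],
    'additionalProperties': False,
}

# Validate a parsed JSON document against the schema.
try:
    validate(instance=json.loads('{"prop_1": 42}'), schema=schema)
    print('valid')
except ValidationError as err:
    print('invalid:', err.message)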


Python Configuration:

import fiddler as fdl

fiddler_custom_features = [
    fdl.Enrichment(
        name='JSONValidation',
        enrichment='json_validation',
        columns=['json_string'],
        config={
            'strict': True,
            'validation_schema': {
                '$schema': 'https://json-schema.org/draft/2020-12/schema',
                'type': 'object',
                'properties': {
                    'prop_1': {'type': 'number'}
                    # ... additional properties
                },
                'required': ['prop_1'],  # ... additional required fields
                'additionalProperties': False
            }
        }
    ),
]

model_spec = fdl.ModelSpec(
    inputs=['json_string'],
    custom_features=fiddler_custom_features,
)

Generated Columns:

Column | Type | Description
JSON Validator valid | bool | True if the string is valid JSON (and conforms to the validation schema, if provided)
JSON Validator errors | str | If the string fails to parse as JSON, any parsing errors are returned as a JSON-serialized list of dictionaries
