# Selecting Enrichments

{% hint style="info" %}
For a complete reference of all LLM enrichments with output columns and sub-metrics, see the [LLM Observability Metrics Reference](/reference/llm-observability-metrics.md).
{% endhint %}

Fiddler offers out-of-the-box enrichments for monitoring different aspects of LLM applications. Use the table below to select the right enrichment for your specific use case.

Each row lists the metric, its category, the enrichment that measures it, whether the enrichment uses an LLM, and, if so, which one.

If your use case is not covered by these out-of-the-box enrichments, contact your administrator.

| Metric                                                                       | Metric Category | Description                                                                                                                                                           | Enrichment                  | LLM Used? | LLM Type                 |
| ---------------------------------------------------------------------------- | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- | --------- | ------------------------ |
| [Faithfulness](/observability/llm/enrichments.md#faithfulness)               | Hallucination   | Assesses the accuracy and reliability of facts presented in AI-generated text                                                                                         | `faithfulness`              | Yes       | OpenAI                   |
| [Fast Faithfulness](/observability/llm/enrichments.md#fast-faithfulness)     | Hallucination   | Assesses the accuracy and reliability of facts presented in AI-generated text, using Fiddler's Fast Trust Models                                                      | `ftl_response_faithfulness` | Yes       | Fiddler Fast Trust Model |
| [Answer Relevance](/observability/llm/enrichments.md#answer-relevance)       | Hallucination   | Measures the pertinence of AI-generated responses to their inputs                                                                                                     | `answer_relevance`          | Yes       | OpenAI                   |
| [Conciseness](/observability/llm/enrichments.md#conciseness)                 | Hallucination   | Evaluates the brevity and clarity of AI-generated responses                                                                                                           | `conciseness`               | Yes       | OpenAI                   |
| [Coherence](/observability/llm/enrichments.md#coherence)                     | Hallucination   | Assesses the logical flow and clarity of AI-generated responses                                                                                                       | `coherence`                 | Yes       | OpenAI                   |
| [Fast Safety](/observability/llm/enrichments.md#fast-safety)                 | Safety          | Generates 11 safety metrics for scoring text: `illegal, hateful, harassing, racist, sexist, violent, sexual, harmful, unethical, jailbreaking, roleplaying`           | `ftl_prompt_safety`         | Yes       | Fiddler Fast Trust Model |
| [PII](/observability/llm/enrichments.md#personally-identifiable-information) | Safety          | Flags the presence of sensitive information within text                                                                                                               | `pii`                       | No        |                          |
| [Regex Match](/observability/llm/enrichments.md#regex-match)                 | Safety          | Compares the text against a regular expression string                                                                                                                 | `regex_match`               | No        |                          |
| [Topic](/observability/llm/enrichments.md#topic)                             | Safety          | Classifies the text into several preset dimensions using a zero-shot classifier                                                                                       | `topic_model`               | No        |                          |
| [Banned Keywords](/observability/llm/enrichments.md#banned-keyword-detector) | Safety          | Detects the presence of user-configured banned keywords                                                                                                               | `banned_keywords`           | No        |                          |
| [Profanity](/observability/llm/enrichments.md#profanity)                     | Safety          | Flags the use of offensive or inappropriate language                                                                                                                  | `profanity`                 | No        |                          |
| [Language Detection](/observability/llm/enrichments.md#language-detector)    | Safety          | Identifies the language of the source text                                                                                                                            | `language_detection`        | No        |                          |
| [Evaluate](/observability/llm/enrichments.md#evaluate)                       | Text Statistics | Provides classic text evaluation metrics such as BLEU, ROUGE, and METEOR                                                                                              | `evaluate`                  | No        |                          |
| [Sentiment](/observability/llm/enrichments.md#sentiment)                     | Text Statistics | Provides sentiment analysis of the target text                                                                                                                        | `sentiment`                 | No        |                          |
| [TextStat](/observability/llm/enrichments.md#textstat)                       | Text Statistics | Provides text statistics such as character/letter counts, Flesch-Kincaid readability, and others                                                                      | `textstat`                  | No        |                          |
| [Token Count](/observability/llm/enrichments.md#token-count)                 | Text Statistics | Counts the number of tokens in a string                                                                                                                               | `token_count`               | No        |                          |
| [SQL Validation](/observability/llm/enrichments.md#sql-validation)           | Text Validation | Evaluates queries in different SQL dialects for syntax correctness                                                                                                    | `sql_validation`            | No        |                          |
| [JSON Validation](/observability/llm/enrichments.md#json-validation)         | Text Validation | Validates JSON for correctness and, optionally, against a user-defined schema                                                                                         | `json_validation`           | No        |                          |
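
One practical use of the "LLM Used?" column is cost and latency triage: enrichments that do not call an LLM are typically cheaper to run at high volume. The sketch below shows this kind of filtering over a small excerpt of the table, using a hypothetical helper and data structure (not part of the Fiddler client):

```python
# Hypothetical mapping transcribed from a few rows of the table above:
# metric name -> (enrichment identifier, uses_llm).
ENRICHMENTS = {
    "Faithfulness": ("faithfulness", True),
    "Fast Safety": ("ftl_prompt_safety", True),
    "PII": ("pii", False),
    "Regex Match": ("regex_match", False),
    "Token Count": ("token_count", False),
    "JSON Validation": ("json_validation", False),
}

def llm_free_enrichments(table=ENRICHMENTS):
    """Return the enrichment identifiers that run without calling an LLM."""
    return sorted(ident for ident, uses_llm in table.values() if not uses_llm)

print(llm_free_enrichments())
# → ['json_validation', 'pii', 'regex_match', 'token_count']
```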


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.fiddler.ai/observability/llm/selecting-enrichments.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
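
As a minimal sketch, the request URL can be assembled with Python's standard library so the question is properly URL-encoded (the question text here is illustrative):

```python
from urllib.parse import urlencode

BASE = "https://docs.fiddler.ai/observability/llm/selecting-enrichments.md"

def ask_url(question: str) -> str:
    """Build the documentation-query URL; urlencode handles spaces and '?'."""
    return f"{BASE}?{urlencode({'ask': question})}"

print(ask_url("Which enrichments require an LLM?"))
# → https://docs.fiddler.ai/observability/llm/selecting-enrichments.md?ask=Which+enrichments+require+an+LLM%3F
```

Sending an HTTP GET request to the resulting URL returns the answer and supporting excerpts described above.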
