ContextRelevance

Evaluator to assess how relevant retrieved documents are to a user query.

The ContextRelevance evaluator measures whether retrieved documents provide sufficient context to answer a given question. This is a critical metric for RAG (Retrieval-Augmented Generation) pipelines, where it verifies that the retrieval step is fetching useful information.

Key Features:

  • Retrieval Assessment: Determines whether the retrieved documents support answering the query

  • Three-Level Scoring: Returns high (1.0), medium (0.5), or low (0.0) relevance scores

  • RAG Pipeline Evaluation: Specifically designed for evaluating retrieval quality

  • Detailed Reasoning: Provides explanation for the relevance assessment

  • Fiddler API Integration: Uses Fiddler's built-in context relevance model

Use Cases:

  • RAG Systems: Evaluating retrieval quality in RAG pipelines

  • Search Systems: Assessing if search results are relevant to queries

  • Document Q&A: Verifying retrieved context supports the question

  • Knowledge Base Evaluation: Testing retrieval effectiveness

Scoring Logic:

  • 1.0 (High): Retrieved documents provide all necessary information to answer the query

  • 0.5 (Medium): Retrieved documents are on topic but don't fully support a complete answer

  • 0.0 (Low): Retrieved documents are not relevant to the query
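The three levels above are what the evaluator encodes in the value field of its result. As a purely illustrative restatement (the constant and helper below are hand-written for this page, not exported by the library), the mapping can be used to threshold results downstream:

    # Hand-written restatement of the documented scoring levels
    # (illustrative only; not part of the Fiddler API).
    RELEVANCE_VALUES = {"high": 1.0, "medium": 0.5, "low": 0.0}

    def meets_threshold(label: str, minimum: float = 0.5) -> bool:
        # True when the label corresponds to at least medium relevance.
        return RELEVANCE_VALUES[label] >= minimum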

Parameters

| Parameter           | Type      | Required | Default | Description                         |
| ------------------- | --------- | -------- | ------- | ----------------------------------- |
| user_query          | str       |          | None    | The question or query being asked.  |
| retrieved_documents | list[str] |          | None    | The documents retrieved as context. |

Returns

A Score object containing:

  • value: 1.0 for high, 0.5 for medium, or 0.0 for low relevance

  • label: "high", "medium", or "low"

  • reasoning: Detailed explanation of the assessment

Return type: Score

Example

This evaluator uses Fiddler's built-in context relevance assessment model and requires an active connection to the Fiddler API.

name = 'context_relevance'
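A minimal usage sketch is shown below. The import path is an assumption (it is not confirmed by this page), and the snippet assumes Fiddler API credentials are already configured in the environment; only the score() signature and the Score fields come from the documentation above.

    # Minimal sketch, assuming the evaluator is importable from a Fiddler
    # evaluation package; the import path below is a guess, not the documented one.
    # Requires an active connection to the Fiddler API.
    from fiddler_evals.evaluators import ContextRelevance  # hypothetical path

    evaluator = ContextRelevance()

    result = evaluator.score(
        user_query="What is the capital of France?",
        retrieved_documents=[
            "Paris is the capital and most populous city of France.",
            "France is a country in Western Europe.",
        ],
    )

    print(result.value)      # 1.0, 0.5, or 0.0
    print(result.label)      # "high", "medium", or "low"
    print(result.reasoning)  # detailed explanation of the assessment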

score()

Score the relevance of retrieved documents to a query.

Parameters

| Parameter           | Type      | Required | Default | Description                         |
| ------------------- | --------- | -------- | ------- | ----------------------------------- |
| user_query          | str       |          | None    | The question or query being asked.  |
| retrieved_documents | list[str] |          | None    | The documents retrieved as context. |

Returns

A Score object containing:

  • value: 1.0 for high, 0.5 for medium, or 0.0 for low relevance

  • label: "high", "medium", or "low"

  • reasoning: Detailed explanation of the assessment

Return type: Score
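Because score() returns a Score with a numeric value, a label, and reasoning, it can act as a gate in a RAG pipeline, for example skipping answer generation when retrieval clearly missed. The sketch below uses trivial stand-in functions for retrieval and generation; only the call to score() and the Score fields reflect this page.

    # Sketch of gating a RAG pipeline on context relevance. retrieve() and
    # generate_answer() are placeholder stand-ins, not part of this library.
    def retrieve(query: str) -> list[str]:
        # Placeholder retriever; a real pipeline would query a vector store.
        return ["Paris is the capital and most populous city of France."]

    def generate_answer(query: str, docs: list[str]) -> str:
        # Placeholder generator; a real pipeline would call an LLM.
        return f"Answer to {query!r} based on {len(docs)} document(s)."

    def answer_with_relevance_gate(evaluator, query: str) -> str:
        docs = retrieve(query)
        relevance = evaluator.score(user_query=query, retrieved_documents=docs)
        if relevance.value == 0.0:
            # Low relevance: report the evaluator's reasoning instead of
            # answering from unrelated context.
            return f"No relevant context found: {relevance.reasoning}"
        return generate_answer(query, docs)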
