AnswerRelevance
API reference for AnswerRelevance
AnswerRelevance
Evaluator to assess how well an answer addresses a given question.
The AnswerRelevance evaluator measures whether an LLM’s answer is relevant and directly addresses the question being asked. This is a critical metric for ensuring that LLM responses stay on topic and provide meaningful value to users.
Key Features:
Relevance Assessment: Determines if the answer directly addresses the question
Binary Scoring: Returns 1.0 for relevant answers, 0.0 for irrelevant ones
Detailed Reasoning: Provides explanation for the relevance assessment
Fiddler API Integration: Uses Fiddler’s built-in relevance evaluation model
Use Cases:
Q&A Systems: Ensuring answers stay on topic
Customer Support: Verifying responses address user queries
Educational Content: Checking if explanations answer the question
Research Assistance: Validating that responses are relevant to queries
Scoring Logic:
1.0 (Relevant): Answer directly addresses the question with relevant information
0.0 (Irrelevant): Answer doesn’t address the question or goes off-topic
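To make the rubric concrete, here is a pair of hand-written prompt/response examples together with the score each should receive under the logic above. The expected values are illustrative only, not output from the evaluator.

```python
# Hand-written prompt/response pairs with the score the rubric above implies.
# These expected values are illustrative only, not evaluator output.
examples = [
    {
        "prompt": "What is the capital of France?",
        "response": "The capital of France is Paris.",
        "expected_value": 1.0,  # directly answers the question -> relevant
    },
    {
        "prompt": "What is the capital of France?",
        "response": "France is known for its cheese and wine.",
        "expected_value": 0.0,  # related topic, but does not answer -> irrelevant
    },
]
```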
Parameters

| Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| prompt | str | ✗ | None | The question being asked. |
| response | str | ✗ | None | The LLM's response to evaluate. |
Returns
A Score object containing:
- value: 1.0 if relevant, 0.0 if irrelevant
- label: string representation of the boolean result
- reasoning: detailed explanation of the assessment

Return type: Score
Example
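A minimal sketch of instantiating the evaluator and scoring a single question/answer pair. The import path below is an assumption; check your installed Fiddler evals package for the exact module and any required client setup.

```python
# Minimal sketch: the import path below is an assumption; consult your installed
# Fiddler evals package for the exact module and any required client setup.
from fiddler_evals.evaluators import AnswerRelevance  # hypothetical import path

evaluator = AnswerRelevance()

score = evaluator.score(
    prompt="What is the capital of France?",
    response="The capital of France is Paris.",
)

print(score.value)      # 1.0 if relevant, 0.0 if irrelevant
print(score.label)      # string representation of the boolean result
print(score.reasoning)  # explanation of the assessment
```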
Attribute: name = 'answer_relevance' (the evaluator's name).
score()
Score the relevance of an answer to a question.
Parameters

| Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| prompt | str | ✗ | None | The question being asked. |
| response | str | ✗ | None | The LLM's response to evaluate. |
Returns
A Score object containing:
- value: 1.0 if relevant, 0.0 if irrelevant
- label: string representation of the boolean result
- reasoning: detailed explanation of the assessment

Return type: Score
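As a usage sketch, score() can be called once per question/answer pair to filter out off-topic responses. The import path and any client configuration are assumptions, as noted in the example above.

```python
# Sketch: filtering a batch of Q&A pairs by relevance. Assumes the same
# hypothetical import path and the Score fields documented above.
from fiddler_evals.evaluators import AnswerRelevance  # hypothetical import path

evaluator = AnswerRelevance()

qa_pairs = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("What is the capital of France?", "I enjoy long walks on the beach."),
]

relevant_pairs = [
    (prompt, response)
    for prompt, response in qa_pairs
    if evaluator.score(prompt=prompt, response=response).value == 1.0
]
```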