Coherence

API reference for Coherence

Coherence

Evaluator to assess the coherence and logical flow of a response.

The Coherence evaluator measures whether a response is well-structured, logically consistent, and flows naturally from one idea to the next. This metric is important for ensuring that responses are easy to follow and understand, with clear connections between different parts of the text.

Key Features:

  • Coherence Assessment: Determines if the response has logical flow and structure

  • Binary Scoring: Returns 1.0 for coherent responses, 0.0 for incoherent ones

  • Optional Context: Can optionally use a prompt for context-aware evaluation

  • Detailed Reasoning: Provides explanation for the coherence assessment

  • Fiddler API Integration: Uses Fiddler’s built-in coherence evaluation model

Use Cases:

  • Content Quality: Ensuring responses are well-structured and logical

  • Educational Content: Verifying explanations flow logically

  • Technical Documentation: Checking if instructions are coherent

  • Creative Writing: Assessing narrative flow and consistency

  • Conversational AI: Ensuring responses make sense in context

Scoring Logic:

  • 1.0 (Coherent): Response has clear logical flow and structure

  • 0.0 (Incoherent): Response lacks logical flow or has structural issues
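
Given this binary contract, the returned score can be consumed as a simple pass/fail gate. A minimal sketch in Python (the Score fields used here follow the Returns section below):

```python
# Hedged sketch: treat the binary coherence score as a pass/fail gate.
# `score` is the Score object described in the Returns section.
def passes_coherence(score) -> bool:
    return score.value == 1.0
```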

Parameters

  • response (str, required) – The response to evaluate for coherence.

  • prompt (str, optional; default: None) – The original prompt that generated the response. Used for context-aware coherence evaluation.

Returns

A Score object containing:

  • name: “is_coherent”

  • evaluator_name: “Coherence”

  • value: 1.0 if coherent, 0.0 if incoherent

  • label: String representation of the boolean result

  • reasoning: Explanation for the coherence assessment

Return type: Score

Raises

ValueError – If the response is empty or None, or if no scores are returned from the API.

Example

This evaluator uses Fiddler’s built-in coherence assessment model and requires an active connection to the Fiddler API. The optional prompt parameter can provide additional context for more accurate coherence evaluation, especially when the response needs to be evaluated in relation to a specific question or task.
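
A minimal usage sketch follows. The import path and evaluator construction are assumptions based on this reference, not verbatim SDK code; adapt them to your installation and an authenticated Fiddler connection.

```python
# Illustrative sketch only: the import path below is an assumption,
# not a confirmed Fiddler SDK module layout.
from fiddler_evals.evaluators import Coherence  # assumed import path

evaluator = Coherence()  # uses Fiddler's built-in coherence model

score = evaluator.score(
    prompt="Explain how photosynthesis works.",  # optional context
    response=(
        "Photosynthesis converts light energy into chemical energy. "
        "Chlorophyll absorbs sunlight, which drives the conversion of "
        "carbon dioxide and water into glucose and oxygen."
    ),
)

print(score.name)       # "is_coherent"
print(score.value)      # 1.0 if coherent, 0.0 if incoherent
print(score.label)      # string form of the boolean result
print(score.reasoning)  # explanation for the assessment
```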

Attributes

name = 'coherence'

score()

Score the coherence of a response.

Parameters

  • prompt (str, optional; default: None) – The original prompt that generated the response.

  • response (str, required) – The response to evaluate for coherence.

Returns

A Score object for coherence assessment.

Return type: Score
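
As a sketch, a direct call to score() can also guard against the documented ValueError (imports and evaluator construction as assumed in the Example above):

```python
# Hedged sketch: handle the documented ValueError for invalid input.
try:
    score = evaluator.score(response="")  # empty response -> ValueError
except ValueError as exc:
    # Raised when the response is empty/None or the API returns no scores.
    print(f"Coherence evaluation failed: {exc}")
else:
    print(score.label, "-", score.reasoning)
```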
