set_llm_context

Sets additional context information on a language model instance.

This context provides environmental or operational information that will be attached to all spans created for this model. Use this to add relevant metadata such as user preferences, session state, or runtime conditions that influenced the LLM’s behavior. This is valuable for debugging and understanding why the model produced specific outputs.

Supports both BaseLanguageModel instances and RunnableBinding objects. When a RunnableBinding is provided, the context is automatically set on the underlying bound object (which must be a BaseLanguageModel).

For more information on RunnableBinding, see: https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableBinding.html

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `llm` | `BaseLanguageModel \| RunnableBinding` | – | – | The language model instance or binding. |
| `context` | `str` | – | – | The context string to add. This will be included in span attributes as `gen_ai.llm.context`. |

Raises

TypeError – If a RunnableBinding is provided but its bound object is not a BaseLanguageModel.

Return type

None

Examples

Basic usage with ChatOpenAI:

```python
from langchain_openai import ChatOpenAI
from fiddler_langgraph.tracing.instrumentation import set_llm_context

llm = ChatOpenAI(model="gpt-4")
set_llm_context(llm, "User prefers concise responses")
```

With user preferences:
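The code for this example is not shown in the source; the following is a minimal sketch that follows the basic-usage pattern above. The preference string itself is illustrative — any string describing the user's preferences can be passed.

```python
from langchain_openai import ChatOpenAI
from fiddler_langgraph.tracing.instrumentation import set_llm_context

llm = ChatOpenAI(model="gpt-4")
# Illustrative preference string; it will appear on every span
# created for this model as the 'gen_ai.llm.context' attribute.
set_llm_context(llm, "User prefers bullet-point answers in formal English")
```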

Using with RunnableBinding:
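The code for this example is also missing from the source; the sketch below assumes the common way to obtain a RunnableBinding — calling `.bind()` on the model. Per the description above, the context is set on the underlying bound model.

```python
from langchain_openai import ChatOpenAI
from fiddler_langgraph.tracing.instrumentation import set_llm_context

llm = ChatOpenAI(model="gpt-4")
# .bind() returns a RunnableBinding that wraps the model with
# extra invocation kwargs; set_llm_context unwraps it and sets
# the context on the bound BaseLanguageModel.
bound_llm = llm.bind(stop=["\n"])
set_llm_context(bound_llm, "Bound model used for single-line completions")
```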

Adding session context:
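Again, the original code is not shown; this sketch assumes a session identifier is formatted into the context string. The field names (`session_id`, `locale`) are hypothetical — the API takes any string.

```python
from langchain_openai import ChatOpenAI
from fiddler_langgraph.tracing.instrumentation import set_llm_context

llm = ChatOpenAI(model="gpt-4")
# Hypothetical session fields; the formatted string is attached to
# all spans for this model, which helps correlate traces per session.
session_id = "abc-123"
set_llm_context(llm, f"session_id={session_id}; locale=en-US")
```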
