# LiteLLM Integration

## Overview

[LiteLLM](https://docs.litellm.ai/) provides a unified interface for calling 100+ LLM providers. Fiddler supports two integration modes:

| Mode              | Best for                                                                        | Extra packages required |
| ----------------- | ------------------------------------------------------------------------------- | ----------------------- |
| **LiteLLM SDK**   | Applications calling LLM providers directly via `litellm.completion()`          | None                    |
| **LiteLLM Proxy** | Teams routing all LLM traffic through a centrally managed LiteLLM proxy gateway | None                    |

Both modes work by routing OpenTelemetry traces to Fiddler's OTLP ingestion endpoint using standard environment variables.

***

## LiteLLM SDK Integration

### Overview

LiteLLM includes a built-in OpenTelemetry integration. When you enable it and point the OTLP exporter at Fiddler, every `litellm.completion()` call is automatically traced — with no Fiddler-specific package required.

Fiddler natively ingests LiteLLM SDK-generated OTel traces and maps them to the Fiddler schema, giving you full observability over prompts, responses, and token usage across all LLM providers.

{% hint style="warning" %}
**Conversation tracking is not currently supported** for the LiteLLM integration. Session-level grouping of multi-turn conversations will be addressed in a future release as part of broader session attribute support.
{% endhint %}

### Architecture

```
┌──────────────────────────────────────┐
│   Your Application                   │
│   litellm.callbacks = ["otel"]       │
│   litellm.completion(...)            │
└──────────────────┬───────────────────┘
                   │ OTLP/HTTP
                   ▼
┌──────────────────────────────────────┐
│   Fiddler OTLP Ingestion Endpoint    │
│                                      │
│   ┌──────────────────────────────┐   │
│   │  LiteLLM SDK Span Mapper     │   │
│   │  - Extracts message content  │   │
│   │  - Maps token usage          │   │
│   │  - Maps to Fiddler schema    │   │
│   └──────────────────────────────┘   │
│              │                       │
│              ▼                       │
│   ┌──────────────────────────────┐   │
│   │  Analytics & Visualization   │   │
│   │  - Trace explorer            │   │
│   │  - Latency monitoring        │   │
│   └──────────────────────────────┘   │
└──────────────────────────────────────┘
```

### Prerequisites

* Fiddler account with a GenAI application already created
* `pip install litellm` (or `uv add litellm`)
* A valid LLM provider API key (e.g. `OPENAI_API_KEY` for OpenAI models)

### Quick Start

#### Step 1: Set environment variables

Set these before starting your application:

```bash
# Fiddler OTel ingestion
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-fiddler-instance.com"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <your-fiddler-token>,fiddler-application-id=<your-app-uuid>"
export OTEL_RESOURCE_ATTRIBUTES="application.id=<your-app-uuid>"

# LLM provider key (name varies by provider)
export OPENAI_API_KEY="your-openai-key"
```

To find your application UUID: navigate to your application in the Fiddler UI and copy the UUID from the URL or application settings.

#### Step 2: Enable the built-in OTel callback

Add one line to your application startup:

```python
import litellm

litellm.callbacks = ["otel"]
```

#### Step 3: Make completions as normal

No other code changes are required:

```python
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
```

Every call is now automatically traced and exported to Fiddler.
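
Putting the steps together, here is a minimal end-to-end sketch. Setting the variables via `os.environ` is an alternative to exporting them in your shell; the endpoint, token, and UUID values are placeholders you must replace:

```python
import os

# Placeholders: substitute your Fiddler endpoint, token, and application UUID.
# Setting these before importing litellm ensures the OTel exporter sees them.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://your-fiddler-instance.com"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "authorization=Bearer <your-fiddler-token>,"
    "fiddler-application-id=<your-app-uuid>"
)
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "application.id=<your-app-uuid>"

import litellm

# Enable the built-in OTel callback before the first completion call.
litellm.callbacks = ["otel"]

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```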

#### Step 4: Verify traces are arriving

Open the Fiddler UI and navigate to your application's **Trace Explorer**. You should see the trace within a few seconds of making your first completion call.

### What Gets Captured

**Message Content**

| Fiddler Field    | Description                               |
| ---------------- | ----------------------------------------- |
| System prompt    | The system instructions sent to the model |
| User input       | The most recent user turn                 |
| Assistant output | The model's response                      |

**Token Usage**

| Attribute                    | Description                 |
| ---------------------------- | --------------------------- |
| `gen_ai.usage.input_tokens`  | Prompt tokens consumed      |
| `gen_ai.usage.output_tokens` | Completion tokens generated |
| `gen_ai.usage.total_tokens`  | Total tokens                |

**Model Information**

| Attribute               | Description                           |
| ----------------------- | ------------------------------------- |
| `gen_ai.request.model`  | Model requested (e.g. `gpt-4o-mini`)  |
| `gen_ai.response.model` | Model actually used                   |
| `gen_ai.system`         | Provider (e.g. `openai`, `anthropic`) |

### Supported Features

| Feature               | Support         | Notes                                                               |
| --------------------- | --------------- | ------------------------------------------------------------------- |
| LLM call tracing      | ✅ Full          | Prompts, responses, token usage                                     |
| Token usage           | ✅ Full          | Input, output, and total tokens                                     |
| Model information     | ✅ Full          | Requested and actual model, provider                                |
| Cost tracking         | ❌ Not supported | LiteLLM SDK does not emit `gen_ai.cost.*` attributes                |
| Tool spans            | ❌ Not supported | LiteLLM SDK does not emit tool spans                                |
| Conversation tracking | ❌ Not supported | Session-level grouping of multi-turn conversations is not available |

### Troubleshooting

**Traces not appearing in Fiddler**

Check that all three environment variables are set correctly:

```bash
echo $OTEL_EXPORTER_OTLP_ENDPOINT
echo $OTEL_EXPORTER_OTLP_HEADERS
echo $OTEL_RESOURCE_ATTRIBUTES
```

Check that `litellm.callbacks = ["otel"]` is set before your first `litellm.completion()` call.

**Check that the `fiddler-application-id` header and `application.id` resource attribute are both set**

Both are required. `fiddler-application-id` must be a valid UUID for an existing Fiddler application, otherwise spans will be dropped during ingestion.

***

## LiteLLM Proxy Integration

### Overview

[LiteLLM Proxy](https://docs.litellm.ai/) is an OpenAI-compatible gateway that lets you call 100+ LLM providers through a single API. When the proxy is configured to emit OpenTelemetry traces, Fiddler automatically detects and ingests them — no additional SDK or code changes required.

Fiddler includes a purpose-built mapper for LiteLLM proxy traces that handles the proxy's specific span format, attribute layout, and operation naming conventions. This gives you full observability over every LLM call routed through your proxy: prompts, responses, token usage, cost metadata, and latency — across all models and providers in one place.

### Architecture

```
┌──────────────────────────────────────┐
│   Your Application                   │
│   (any language, any framework)      │
└──────────────────┬───────────────────┘
                   │ OpenAI-compatible API
                   ▼
┌──────────────────────────────────────┐
│   LiteLLM Proxy                      │
│   OTEL_EXPORTER_OTLP_ENDPOINT set    │
│   service.name = "litellm" (default) │
└──────────────────┬───────────────────┘
                   │ OTLP/gRPC or OTLP/HTTP
                   ▼
┌──────────────────────────────────────┐
│   Fiddler OTLP Ingestion Endpoint    │
│                                      │
│   ┌──────────────────────────────┐   │
│   │  LiteLLM Span Mapper         │   │
│   │  - Detects litellm spans     │   │
│   │  - Classifies llm / chain    │   │
│   │  - Extracts message content  │   │
│   │  - Maps to Fiddler schema    │   │
│   └──────────────────────────────┘   │
│              │                       │
│              ▼                       │
│   ┌──────────────────────────────┐   │
│   │  Analytics & Visualization   │   │
│   │  - Trace explorer            │   │
│   │  - Cost dashboards           │   │
│   │  - Latency monitoring        │   │
│   └──────────────────────────────┘   │
└──────────────────────────────────────┘
```

### When to Use This Integration

Use the LiteLLM proxy integration when:

* You are already running LiteLLM proxy as your LLM gateway
* You want to monitor all LLM traffic centrally regardless of underlying provider (OpenAI, Anthropic, Bedrock, etc.)
* You want cost attribution and latency tracking without instrumenting individual applications

### Quick Start

#### Step 1: Configure LiteLLM proxy to emit OpenTelemetry

Set the following environment variables before starting the proxy:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-fiddler-instance.com"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <your-fiddler-token>,fiddler-application-id=<your-app-uuid>"
export OTEL_RESOURCE_ATTRIBUTES="application.id=<your-app-uuid>"

litellm --config config.yaml
```

Or set them inside your LiteLLM proxy `config.yaml`:

```yaml
general_settings:
  otel: true

environment_variables:
  OTEL_EXPORTER_OTLP_ENDPOINT: "https://your-fiddler-instance.com"
  OTEL_EXPORTER_OTLP_HEADERS: "authorization=Bearer <your-fiddler-token>,fiddler-application-id=<your-app-uuid>"
  OTEL_RESOURCE_ATTRIBUTES: "application.id=<your-app-uuid>"
```
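
For reference, a sketch of how these settings might sit alongside your routing configuration. The `model_list` entry follows LiteLLM's standard config layout and is illustrative only; substitute your own models and keys:

```yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

general_settings:
  otel: true

environment_variables:
  OTEL_EXPORTER_OTLP_ENDPOINT: "https://your-fiddler-instance.com"
  OTEL_EXPORTER_OTLP_HEADERS: "authorization=Bearer <your-fiddler-token>,fiddler-application-id=<your-app-uuid>"
  OTEL_RESOURCE_ATTRIBUTES: "application.id=<your-app-uuid>"
```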

#### Step 2: Set your Fiddler application ID

Two environment variables carry your application ID and both are required:

* **`OTEL_RESOURCE_ATTRIBUTES`** — sets `application.id` on every OTel resource, which Fiddler uses to route traces to the correct application
* **`OTEL_EXPORTER_OTLP_HEADERS`** — includes `fiddler-application-id` as an HTTP header for authentication and routing at the ingestion endpoint

To find your application UUID: navigate to your application in the Fiddler UI and copy the UUID from the URL or application settings.

#### Step 3: Verify traces are arriving

Make a test request through your proxy:

```bash
curl -X POST https://your-litellm-proxy/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
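
Equivalently, since the proxy exposes an OpenAI-compatible API, any OpenAI client pointed at it will work. A sketch using the official Python client (the base URL is a placeholder, and the key is your proxy's virtual key, only needed if your deployment enforces authentication):

```python
from openai import OpenAI

# Placeholder proxy URL and key; substitute your own deployment's values.
client = OpenAI(
    base_url="https://your-litellm-proxy",
    api_key="<your-proxy-key>",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```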

Then open the Fiddler UI and navigate to your application's **Trace Explorer**. You should see the trace within a few seconds.

### What Gets Captured

#### Span Types

LiteLLM proxy emits several span types per request. Fiddler classifies them as follows:

| LiteLLM Span Name                  | Operation                             | Fiddler Type | Description                                        |
| ---------------------------------- | ------------------------------------- | ------------ | -------------------------------------------------- |
| `litellm_request`                  | `chat` / `completion`                 | `llm`        | Parent completion span                             |
| `raw_gen_ai_request`               | `chat` / `acompletion` / `completion` | `llm`        | Raw provider-level span with full request/response |
| `Received Proxy Server Request`    | `acompletion`                         | `llm`        | Top-level server span                              |
| `self`, `router`, `proxy_pre_call` | —                                     | `chain`      | Internal LiteLLM infrastructure spans              |
| Embeddings, other ops              | non-completion                        | `chain`      | Non-chat operations                                |

#### Captured Attributes

**Message Content**

LiteLLM writes the full conversation history as JSON directly on the span (not as span events). Fiddler extracts:

| Fiddler Field    | Source                                                      | Description                               |
| ---------------- | ----------------------------------------------------------- | ----------------------------------------- |
| System prompt    | First `role: system` message in `gen_ai.input.messages`     | The system instructions sent to the model |
| User input       | Last `role: user` message in `gen_ai.input.messages`        | The most recent user turn                 |
| Assistant output | First `role: assistant` message in `gen_ai.output.messages` | The model's response                      |
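
For illustration, the extraction rules above amount to something like the following sketch. This mimics the mapping logic described in the table, not Fiddler's actual ingestion code, and it assumes the attribute values are JSON-encoded message lists as described above:

```python
import json

def extract_fields(span_attributes: dict) -> dict:
    """Illustrative only: applies the extraction rules from the table above."""
    inputs = json.loads(span_attributes.get("gen_ai.input.messages", "[]"))
    outputs = json.loads(span_attributes.get("gen_ai.output.messages", "[]"))
    return {
        # First system-role message becomes the system prompt.
        "system_prompt": next(
            (m["content"] for m in inputs if m.get("role") == "system"), None
        ),
        # Last user-role message becomes the user input.
        "user_input": next(
            (m["content"] for m in reversed(inputs) if m.get("role") == "user"), None
        ),
        # First assistant-role message becomes the assistant output.
        "assistant_output": next(
            (m["content"] for m in outputs if m.get("role") == "assistant"), None
        ),
    }
```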

{% hint style="info" %}
If you have disabled message logging in LiteLLM (`turn_off_message_logging: true`), the message content fields will be absent from traces. Token counts and cost metadata are still captured.
{% endhint %}

**Token Usage**

| Attribute                    | Description                 |
| ---------------------------- | --------------------------- |
| `gen_ai.usage.input_tokens`  | Prompt tokens consumed      |
| `gen_ai.usage.output_tokens` | Completion tokens generated |
| `gen_ai.usage.total_tokens`  | Total tokens                |

**Model Information**

| Attribute               | Description                                     |
| ----------------------- | ----------------------------------------------- |
| `gen_ai.request.model`  | Model requested (e.g. `gpt-4o-mini`)            |
| `gen_ai.response.model` | Model actually used (may differ from requested) |
| `gen_ai.system`         | Provider (e.g. `openai`, `anthropic`)           |

**Cost Metadata** (stored as `fiddler.span.user.*`)

LiteLLM emits cost fields under `gen_ai.cost.*`. These are preserved in Fiddler as user-visible span attributes:

| Attribute                     | Description                          |
| ----------------------------- | ------------------------------------ |
| `gen_ai.cost.total_cost`      | Total cost of the request            |
| `gen_ai.cost.prompt_cost`     | Cost attributed to prompt tokens     |
| `gen_ai.cost.completion_cost` | Cost attributed to completion tokens |

**Proxy Metadata** (stored as `fiddler.span.user.*`)

LiteLLM proxy emits `metadata.*` attributes containing API key, team, user, and routing information. These are preserved as user-visible span attributes for auditing and cost attribution.

### Supported Features

| Feature               | Support         | Notes                                                                |
| --------------------- | --------------- | -------------------------------------------------------------------- |
| LLM call tracing      | ✅ Full          | Prompts, responses, token usage                                      |
| Cost tracking         | ✅ Full          | Via `gen_ai.cost.*` attributes                                       |
| Provider attribution  | ✅ Full          | Via `gen_ai.system`                                                  |
| Proxy metadata        | ✅ Full          | API key, team, user, routing info                                    |
| Tool spans            | ❌ Not supported | LiteLLM proxy does not emit tool spans natively                      |
| Infrastructure spans  | ⚠️ As `chain`   | `self`, `router` spans are captured but classified as generic chains |
| Conversation tracking | ❌ Not supported | Session-level grouping of multi-turn conversations is not available  |

### Troubleshooting

**Traces not appearing in Fiddler**

Check that OTel is enabled in your proxy config (`general_settings: otel: true`) and that all three environment variables are set:

```bash
echo $OTEL_EXPORTER_OTLP_ENDPOINT
echo $OTEL_EXPORTER_OTLP_HEADERS
echo $OTEL_RESOURCE_ATTRIBUTES
```

**Check that the `fiddler-application-id` header and `application.id` resource attribute are both set**

Both are required. `fiddler-application-id` must be a valid UUID for an existing Fiddler application, otherwise spans will be dropped during ingestion.

**Check `service.name` is `"litellm"`**

Fiddler detects LiteLLM proxy spans by `service.name`. LiteLLM proxy sets this to `"litellm"` by default. If you have overridden `OTEL_SERVICE_NAME`, ensure it is set to `"litellm"` or `"litellm-proxy"`:

```bash
export OTEL_SERVICE_NAME="litellm"
```

**Message content missing from traces**

LiteLLM's message logging may be disabled. Check your config for:

```yaml
litellm_settings:
  turn_off_message_logging: true  # This suppresses gen_ai.input/output.messages
```

Remove or set to `false` to re-enable message capture.

**Spans classified as `chain` instead of `llm`**

This happens for internal LiteLLM infrastructure spans (`self`, `router`, `proxy_pre_call`) and for non-chat operations (embeddings). This is expected behavior — only completion operations are classified as `llm` spans.

***

## Related Documentation

* [OpenTelemetry Integration](https://docs.fiddler.ai/integrations/agentic-ai-and-llm-frameworks/agentic-ai/opentelemetry-integration) — Manual OTel instrumentation for custom frameworks
* [Strands Agents SDK](https://docs.fiddler.ai/integrations/agentic-ai-and-llm-frameworks/agentic-ai/strands-sdk) — Native monitoring for Strands agent applications
* [LangGraph SDK](https://docs.fiddler.ai/integrations/agentic-ai-and-llm-frameworks/agentic-ai/langgraph-sdk) — Auto-instrumentation for LangGraph applications
* [LiteLLM OTel documentation](https://docs.litellm.ai/docs/proxy/logging#opentelemetry) — LiteLLM's official OpenTelemetry setup guide

***

:question: Questions? [Talk](https://www.fiddler.ai/contact-sales) to a product expert or [request](https://www.fiddler.ai/demo) a demo.

:bulb: Need help? Contact us at <support@fiddler.ai>.
