Open-source LLM tracing that catches what dashboards can't

Your agent returned a confidently wrong answer. The error rate stayed at zero. Breadcrumb catches these issues before your users do.

Breadcrumb LLM tracing dashboard showing traces, token counts, latency, and costs

Issues found before your users find them.

A monitoring agent that reads every trace, learns your project, and surfaces what matters.

Auto-detected · hallucination
Search agent returning confident answers from empty context window

Investigating · context loss
Retrieval agent skipping 40% of available documents in summarize workflow

Needs review · cost
Cost spike: generateText calls doubled token usage after prompt template change

Other tools log your traces. Breadcrumb understands them.

Every other tracing tool expects you to find the problems yourself. Breadcrumb's agent reads every trace, builds context over time, and gets smarter about what matters in your project.

hallucination: Agent cited policy doc not in retrieval set (2m ago)
intent mismatch: Responded about order #4812 instead of #4821 (5m ago)
context loss: Dropped 3 of 7 source documents after tool call (8m ago)
loop detected: Same failing tool call retried 4 times, then abandoned (12m ago)
cost anomaly: Token usage doubled across generateText after template change (19m ago)
instruction drift: Correct answer but ignored user constraint on format (23m ago)
hallucination: Generated citation for a paper that doesn't exist (31m ago)
context loss: User name forgotten mid-conversation after tool use (38m ago)

Three lines of code. Never miss an issue.

Works with Vercel AI SDK out of the box. Import, initialize, pass telemetry, stay informed.

import { generateText } from "ai";
import { init } from "@breadcrumb-sdk/core";
import { initAiSdk } from "@breadcrumb-sdk/ai-sdk";

// Initialize the Breadcrumb client and its AI SDK adapter once.
const bc = init({ apiKey, baseUrl });
const { telemetry } = initAiSdk(bc);

const { text } = await generateText({
  // ... model, prompt, and other generateText options
  experimental_telemetry: telemetry("summarize"),
});

Open source. Self-hosted. Your data.

Deploy on Railway, Fly, or your own servers. Fork it, extend it, run it however you want. No usage fees, no vendor lock-in.