Latitude Telemetry lets you connect your existing LLM-powered application to Latitude in 5 minutes, without changing how you call your model providers. Once connected, every LLM execution becomes a feature-scoped log in Latitude that you can inspect, annotate, and evaluate — instead of dumping all traces into a single, unstructured bucket.

Why use Latitude Telemetry?

With Telemetry you can:
  • Get feature-level observability
    Attach executions to specific prompts and versions instead of “one giant trace store”. Slice logs by feature, environment, user, or any metadata you send.
  • Understand real usage and performance
    See which prompts and models are actually used in production, along with latency, error rates, and input/output examples.
  • Annotate real executions
    Your team can label logs (e.g. “great answer”, “hallucination”, “formatting issue”), turning production traffic into a high-signal dataset.
  • Create custom evaluations for each feature
    Use LLM-as-judge, programmatic checks, or human-in-the-loop evaluations to continuously score outputs for each prompt or feature.
  • Automatically surface issues and bottlenecks
    Combine logs, annotations and evaluations to find broken prompts, regressions after a change, or slow/high-cost paths.
All of this works on top of your existing stack — you keep calling OpenAI, Anthropic, Bedrock, etc. directly, and Telemetry observes those calls.

Using capture()

The capture() method wraps your code and manages the telemetry span lifecycle automatically: the span starts when capture() is invoked and ends when your callback completes.
await telemetry.capture(
  { projectId: 123, path: 'my-feature' },
  async () => {
    // Your LLM code here - the span ends when this callback completes
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: input }],
    })
    return response.choices[0].message.content
  }
)

Streaming responses

For streaming responses, consume the stream inside your capture block so the span covers the entire operation:
await telemetry.capture(
  { projectId: 123, path: 'my-feature' },
  async () => {
    const stream = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: input }],
      stream: true,
    })

    // Consume stream inside capture - span stays open until done
    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content
      if (content) {
        res.write(content)
      }
    }
    res.end()
  }
)
Why consume inside capture? When you consume the stream inside the capture block, the span duration accurately reflects the total time of the operation (including streaming). All child spans from provider instrumentation are properly nested under your capture span.
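For contrast, here is a sketch of the pattern to avoid. It assumes, as the first example suggests, that capture() resolves to whatever the callback returns: by returning the unconsumed stream, the span closes before any tokens arrive.
// Anti-pattern: the span closes as soon as the callback returns the
// stream object, before a single token has been received.
const stream = await telemetry.capture(
  { projectId: 123, path: 'my-feature' },
  async () => {
    return openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: input }],
      stream: true,
    })
  }
)

// This streaming time happens outside the span, so the recorded
// duration and the nesting of provider spans will be wrong.
for await (const chunk of stream) {
  res.write(chunk.choices[0]?.delta?.content ?? '')
}
res.end()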

How Telemetry fits into your stack (high level)

At a high level, integrating Telemetry looks like this:
  1. Install the Telemetry package in your app.
  2. Wrap each feature or prompt execution so Latitude can tie logs back to a specific prompt and version (sketched after this list).
  3. See your logs in Latitude and annotate them with your own metadata.
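As a minimal end-to-end sketch of those three steps in a Node/TypeScript app (the package name, constructor, and instrumentations option below are assumptions; the capture() call matches the examples above, and your integration page has the exact setup):
import OpenAI from 'openai'
// Assumed package and constructor names; check your integration page.
import { LatitudeTelemetry } from '@latitude-data/telemetry'

// 1. Initialize once at startup, registering the provider SDKs that
//    Telemetry should observe (assumed option shape).
const telemetry = new LatitudeTelemetry(process.env.LATITUDE_API_KEY, {
  instrumentations: { openai: OpenAI },
})

const openai = new OpenAI()

// 2. Wrap each feature execution so Latitude ties the log back to a
//    specific prompt and version.
export async function runMyFeature(input: string) {
  return telemetry.capture({ projectId: 123, path: 'my-feature' }, async () => {
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: input }],
    })
    return response.choices[0].message.content
  })
}

// 3. The resulting logs appear in Latitude under the given project and path.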

Supported integrations

Latitude Telemetry supports a wide range of providers and frameworks, allowing you to connect your existing LLM-powered application to Latitude in minutes.

More integrations

OpenTelemetry (OTLP ingest)

If you already use OpenTelemetry, you can export OTLP traces directly to Latitude, either straight from your application's SDK (sketched below) or through an OpenTelemetry Collector.
  • URL (Latitude Cloud): https://gateway.latitude.so/api/v3/traces
  • Auth: Authorization: Bearer YOUR_API_KEY
  • Formats: OTLP Protobuf (application/x-protobuf) or OTLP JSON (application/json)
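For direct export from a Node application, a minimal sketch using the standard OpenTelemetry JS packages (with a recent SDK; only the URL and header come from this page, everything else is stock OpenTelemetry) could look like:
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node'
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'

// Send OTLP JSON over HTTP to Latitude's ingest endpoint.
const exporter = new OTLPTraceExporter({
  url: 'https://gateway.latitude.so/api/v3/traces',
  headers: { Authorization: `Bearer ${process.env.LATITUDE_API_KEY}` },
})

const provider = new NodeTracerProvider({
  spanProcessors: [new BatchSpanProcessor(exporter)],
})
provider.register()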
Example (OpenTelemetry Collector):
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp/latitude:
    traces_endpoint: https://gateway.latitude.so/api/v3/traces
    headers:
      Authorization: Bearer ${env:LATITUDE_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp/latitude]
If you are self-hosting, replace the hostname with your Gateway base URL.

Next steps

  1. Choose the provider/framework your application already uses (or OpenTelemetry OTLP ingest).
  2. Open its integration page.
  3. Follow the step-by-step guide to install and initialize Latitude Telemetry for that stack.