Overview

This guide shows you how to integrate Latitude Telemetry into an existing application that uses LlamaIndex. After completing these steps:
  • Each LlamaIndex query or pipeline execution can be captured as a log in Latitude.
  • Logs are attached to a specific prompt and version in Latitude.
  • You can annotate, evaluate, and debug your LlamaIndex-powered features from the Latitude dashboard.
You keep using LlamaIndex as usual — Telemetry observes calls made through the LlamaIndex library.

Requirements

Before you start, make sure you have:
  • A Latitude account and API key.
  • At least one prompt created in Latitude.
  • A Node.js-based project that uses llamaindex.

Steps

1. Install requirements

Add the Latitude Telemetry package to your project:
npm add @latitude-data/telemetry @opentelemetry/api
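If your project uses pnpm or yarn instead of npm, the equivalent commands are:

pnpm add @latitude-data/telemetry @opentelemetry/api
yarn add @latitude-data/telemetry @opentelemetry/api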
2. Initialize Latitude Telemetry with LlamaIndex

Create a LatitudeTelemetry instance and pass the LlamaIndex module as an instrumentation.
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import * as LlamaIndex from 'llamaindex'

export const telemetry = new LatitudeTelemetry('your-latitude-api-key', {
  instrumentations: {
    llamaindex: LlamaIndex, // Enables automatic tracing for LlamaIndex
  },
})
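
In production you will typically load the API key from the environment rather than hard-coding it. A minimal sketch, assuming the key is exposed as a LATITUDE_API_KEY environment variable:

import { LatitudeTelemetry } from '@latitude-data/telemetry'
import * as LlamaIndex from 'llamaindex'

// Fail fast if the key is missing instead of sending unauthenticated requests
const apiKey = process.env.LATITUDE_API_KEY
if (!apiKey) throw new Error('LATITUDE_API_KEY is not set')

export const telemetry = new LatitudeTelemetry(apiKey, {
  instrumentations: {
    llamaindex: LlamaIndex,
  },
})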
3. Wrap your LlamaIndex-powered feature

Wrap the code that calls LlamaIndex with a Telemetry prompt span, and execute your query or pipeline inside that span.
import { context } from '@opentelemetry/api'
import { BACKGROUND } from '@latitude-data/telemetry'

import { agent } from '@llamaindex/workflow'
import { Settings } from 'llamaindex'
import { openai } from '@llamaindex/openai'

import { telemetry } from './telemetry' // The instance created in step 2; adjust the path to match your project

export async function answerQuestion(input: string) {
  const $prompt = telemetry.prompt(BACKGROUND(), {
    promptUuid: 'your-prompt-uuid',
    versionUuid: 'your-version-uuid',
  })

  await context
    .with($prompt.context, async () => {
      // Configure the LLM LlamaIndex should use for this run
      Settings.llm = openai({
        apiKey: process.env.OPENAI_API_KEY,
        model: 'gpt-4o',
      })

      const myAgent = agent({}) // Register your agent's tools and options here
      const response = await myAgent.run(input)

      // Use response here...
    })
    .then(() => $prompt.end())
    .catch((error) => $prompt.fail(error as Error))
    .finally(() => telemetry.flush())
}
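
With the span in place, you call the function like any other async function; each invocation produces a new trace in Latitude. For example (the question text is just an illustration):

await answerQuestion('What is our refund policy?')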

Seeing your logs in Latitude

Once you’ve wrapped your LlamaIndex-powered feature, you can see your logs in Latitude.
  1. Go to the Traces section of your prompt in Latitude.
  2. You should see new entries every time your query runs, including:
    • Query input and generated answer
    • Underlying provider calls (when instrumented)
    • Latency and error information