Overview

This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official OpenAI SDK. After completing these steps:
  • Every OpenAI call (e.g. chat.completions.create) can be captured as a log in Latitude.
  • Logs are attached to a specific prompt and version in Latitude.
  • You can annotate, evaluate, and debug your OpenAI-powered features from the Latitude dashboard.
You’ll keep calling OpenAI directly — Telemetry simply observes and enriches those calls.

Requirements

Before you start, make sure you have:
  • A Latitude account and API key.
  • At least one prompt created in Latitude (so you have a promptUuid and versionUuid to associate logs with).
  • A Node.js-based project that uses the OpenAI SDK.
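The code in the steps below reads credentials from environment variables. A typical setup (the placeholder values are yours to fill in; `LATITUDE_API_KEY` matches the snippet in step 2, and `OPENAI_API_KEY` is the variable the OpenAI SDK conventionally uses) looks like:

```shell
# Replace the placeholders with your real keys before running your app.
export LATITUDE_API_KEY="your-latitude-api-key"   # from your Latitude dashboard
export OPENAI_API_KEY="your-openai-api-key"       # from your OpenAI account
```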

Steps

1

Install requirements

Add the Latitude Telemetry package to your project:
npm add @latitude-data/telemetry @opentelemetry/api
2

Initialize Latitude Telemetry with OpenAI

Create a LatitudeTelemetry instance and pass the OpenAI SDK as an instrumentation.
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import OpenAI from 'openai'

export const telemetry = new LatitudeTelemetry(process.env.LATITUDE_API_KEY!, {
  instrumentations: {
    openai: OpenAI, // This enables automatic tracing for the OpenAI SDK
  },
})
Import this telemetry instance (and, optionally, your OpenAI client) wherever you need to run prompts.
3

Wrap your OpenAI-powered feature

Wrap the code that calls OpenAI with a Telemetry prompt span, and execute your OpenAI call inside that span.
import { context } from '@opentelemetry/api'
import { BACKGROUND } from '@latitude-data/telemetry'
import OpenAI from 'openai'
import { telemetry } from './telemetry' // the instance created in step 2 (adjust the path to your project)

export async function generateSupportReply(input: string) {
  const $prompt = telemetry.prompt(BACKGROUND(), {
    promptUuid: 'your-prompt-uuid',
    versionUuid: 'your-version-uuid', // or "live", depending on your setup
  })

  await context
    .with($prompt.context, async () => {
      // Your regular LLM-powered feature code here:
      const client = new OpenAI({
        apiKey: process.env.OPENAI_API_KEY,
      })

      const completion = await client.chat.completions.create({
        stream: false,
        messages: [
          {
            role: 'system',
            content: 'You are a helpful support agent.', // placeholder system prompt
          },
          {
            role: 'user',
            content: input,
          },
        ],
        model: 'gpt-4o',
      })

      // ...
    })
    .then(() => $prompt.end())
    .catch((error) => $prompt.fail(error as Error))
    .finally(() => telemetry.flush())
}
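The end/fail/flush sequence in the step above is the part that matters most: the span must be ended on success, marked failed on error, and telemetry flushed either way. As a minimal, self-contained sketch of that lifecycle (the `Span` and `Telemetry` interfaces here are hypothetical stand-ins, not the real `@latitude-data/telemetry` types):

```typescript
// Hypothetical stand-ins that only model the lifecycle calls used above.
interface Span {
  end(): void
  fail(error: Error): void
}

interface Telemetry {
  flush(): Promise<void> | void
}

// Run `work` inside a span: end the span on success, mark it failed on
// error, and always flush buffered telemetry afterwards.
export async function withPromptSpan<T>(
  span: Span,
  telemetry: Telemetry,
  work: () => Promise<T>,
): Promise<T | undefined> {
  try {
    const result = await work()
    span.end()
    return result
  } catch (error) {
    span.fail(error as Error)
    return undefined
  } finally {
    await telemetry.flush()
  }
}
```

Flushing in `finally` is the key design choice: if your process exits (or a serverless invocation ends) before buffered spans are flushed, the trace never reaches Latitude.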

Seeing your logs in Latitude

Once you’ve wrapped your OpenAI-powered feature, you can see your logs in Latitude.
  1. Go to the Traces section of your prompt in Latitude.
  2. You should see new entries every time your code is executed, including:
    • Input/output messages
    • Model name
    • Latency and error information