Overview

This guide shows you how to integrate Latitude Telemetry into an existing application that uses the Vercel AI SDK. After completing these steps:
  • Each generateText call can be captured as a log in Latitude.
  • Logs are attached to a specific prompt and version in Latitude.
  • You can annotate, evaluate, and debug your Vercel AI SDK-powered features from the Latitude dashboard.
You keep using the Vercel AI SDK as usual — Telemetry observes calls when telemetry is enabled in generateText.

Requirements

Before you start, make sure you have:
  • A Latitude account and API key.
  • At least one prompt created in Latitude.
  • A Node.js-based project that uses the Vercel AI SDK (the ai package, plus a provider package such as @ai-sdk/openai).

Steps

1. Install requirements

Add the Latitude Telemetry package to your project:
npm add @latitude-data/telemetry @opentelemetry/api
2. Initialize Latitude Telemetry

Create a LatitudeTelemetry instance. No specific instrumentation is required for the Vercel AI SDK.
import { LatitudeTelemetry } from '@latitude-data/telemetry'

export const telemetry = new LatitudeTelemetry('your-latitude-api-key')
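Rather than hardcoding the API key, you may prefer to read it from an environment variable. The helper below is a minimal sketch, not part of the Latitude SDK; LATITUDE_API_KEY is an assumed variable name chosen for illustration.

```typescript
// Hypothetical helper: look up a required value in an environment-like
// record and fail loudly if it is missing. Pass process.env in a Node app.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string,
): string {
  const value = env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// With the SDK installed, you could then initialize Telemetry as:
// export const telemetry = new LatitudeTelemetry(
//   requireEnv(process.env, 'LATITUDE_API_KEY'),
// )
```

This keeps the key out of source control and makes a missing configuration fail at startup rather than at the first traced call.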
3. Wrap your Vercel AI SDK-powered feature

Wrap the code that calls generateText with a Telemetry prompt span, and make sure telemetry is enabled in the Vercel AI SDK call.
import { context } from '@opentelemetry/api'
import { BACKGROUND } from '@latitude-data/telemetry'
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'
// Import the telemetry instance created in step 2 (adjust the path to your project)
import { telemetry } from './telemetry'

export async function generateSupportReply(input: string) {
  const $prompt = telemetry.prompt(BACKGROUND(), {
    promptUuid: 'your-prompt-uuid',
    versionUuid: 'your-version-uuid',
  })

  await context
    .with($prompt.context, async () => {
      const { text } = await generateText({
        model: openai('gpt-4o'),
        prompt: input,
        experimental_telemetry: {
          isEnabled: true, // Make sure to enable experimental telemetry
        },
      })

      // Use text here...
    })
    .then(() => $prompt.end())
    .catch((error) => $prompt.fail(error as Error))
    .finally(() => telemetry.flush())
}
Important: The experimental_telemetry.isEnabled flag must be set to true on generateText for Latitude Telemetry to capture these calls.
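If you wrap several features this way, the end/fail/flush lifecycle from the snippet above can be factored into a reusable helper. The interfaces below are hypothetical minimal shapes that mirror only the calls used in this guide ($prompt.end, $prompt.fail, telemetry.flush); they are not the SDK's actual types. Note one deliberate difference: unlike the snippet above, this helper re-throws the original error after recording it.

```typescript
// Hypothetical minimal shapes mirroring the calls used in this guide.
interface PromptSpan {
  end(): void
  fail(error: Error): void
}
interface TelemetryLike {
  flush(): void | Promise<void>
}

// Run `fn`, then close the span: end() on success, fail() on error,
// and flush() in both cases. The error is re-thrown to the caller.
async function finishSpan<T>(
  telemetry: TelemetryLike,
  span: PromptSpan,
  fn: () => Promise<T>,
): Promise<T> {
  try {
    const result = await fn()
    span.end()
    return result
  } catch (error) {
    span.fail(error as Error)
    throw error
  } finally {
    await telemetry.flush()
  }
}
```

You would create the span with telemetry.prompt(...) as in step 3, then pass the generateText call (with experimental_telemetry enabled) as `fn`.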

Seeing your logs in Latitude

Once you’ve wrapped your Vercel AI SDK-powered feature and enabled telemetry in generateText, you can see your logs in Latitude.
  1. Go to the Traces section of your prompt in Latitude.
  2. You should see new entries every time your code is executed, including:
    • Prompt text and generated output
    • Underlying provider/model used
    • Latency and error information