Overview

This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official Vercel AI SDK. After completing these steps:
  • Every Vercel AI SDK call (e.g. generateText) can be captured as a log in Latitude.
  • Logs are grouped under a prompt, identified by a path, inside a Latitude project.
  • You can inspect inputs/outputs, measure latency, and debug Vercel AI SDK-powered features from the Latitude dashboard.
You’ll keep calling Vercel AI SDK exactly as you do today — Telemetry simply observes and enriches those calls.

Requirements

Before you start, make sure you have:
  • A Latitude account and API key
  • A Latitude project ID
  • A Node.js-based project that uses the Vercel AI SDK
That’s it — prompts do not need to be created ahead of time.

Steps

1. Install requirements

Add the Latitude Telemetry package to your project:
npm add @latitude-data/telemetry
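The snippets in this guide read your API key from an environment variable. One way to provide it (the variable name is taken from the initialization code below; the value is a placeholder):

```shell
# Make the Latitude API key available to your app (placeholder value)
export LATITUDE_API_KEY="your-latitude-api-key"
```

In production you would typically set this through your deployment platform's secrets manager rather than a shell export.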
2. Initialize Latitude Telemetry

Create a single LatitudeTelemetry instance when your app starts. Telemetry can then instrument your Vercel AI SDK calls once experimental_telemetry is enabled on them (shown in step 3).
telemetry.ts
import { LatitudeTelemetry } from '@latitude-data/telemetry'

export const telemetry = new LatitudeTelemetry(process.env.LATITUDE_API_KEY)
The Telemetry instance should only be created once. Any Vercel AI SDK client instantiated after this will be automatically traced.
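Because instrumentation must be in place before any Vercel AI SDK code runs, import the telemetry module first in your entrypoint. A minimal sketch, with file names assumed:

```typescript
// index.ts — assumed entrypoint for your app
// Importing telemetry first ensures instrumentation is active before any
// Vercel AI SDK code executes.
import './telemetry'
import { generateSupportReply } from './generate-support-reply'
```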
3. Wrap your Vercel AI SDK-powered feature

Wrap the code that calls Vercel AI SDK using telemetry.capture.
import { telemetry } from './telemetry'
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'

export async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // Add a path to identify this prompt in Latitude
    },
    async () => {

      // Your regular LLM-powered feature code here
      const { text } = await generateText({
        model: openai('gpt-4o'),
        prompt: input,
        experimental_telemetry: {
          isEnabled: true, // Make sure to enable experimental telemetry
        },
      })

      // You can return anything you want — the value is passed through unchanged
      return text
    }
  )
}
Important: The experimental_telemetry.isEnabled flag must be set to true on generateText for Latitude Telemetry to capture these calls.
The path:
  • Identifies the prompt in Latitude
  • Can be new or existing
  • Should not contain spaces or special characters (use letters, numbers, - _ / .)
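The character rules above are easy to check before calling capture. A minimal sketch of such a check — this helper is not part of the Latitude SDK, just an illustration of the allowed character set (letters, numbers, - _ / .):

```typescript
// Hypothetical helper (not provided by @latitude-data/telemetry) that checks
// whether a prompt path uses only the characters this guide allows:
// letters, numbers, hyphen, underscore, slash, and dot.
function isValidPromptPath(path: string): boolean {
  return /^[A-Za-z0-9._\/-]+$/.test(path)
}

console.log(isValidPromptPath('generate-support-reply')) // true
console.log(isValidPromptPath('support/replies.v2'))     // true
console.log(isValidPromptPath('generate support reply')) // false (contains a space)
```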

Seeing your logs in Latitude

Once your feature is wrapped, logs will appear automatically.
  1. Open the prompt in your Latitude dashboard (identified by path)
  2. Go to the Traces section
  3. Each feature invocation produces one trace, showing:
    • Input and output messages
    • Model and token usage
    • Latency and errors
Each Vercel AI SDK call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.

That’s it

No changes to your Vercel AI SDK calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.