Overview

This guide shows you how to integrate Latitude Telemetry into an existing application that uses Google Vertex AI via @google-cloud/vertexai. After completing these steps:
  • Every Vertex AI generation can be captured as a log in Latitude.
  • Logs are attached to a specific prompt and version in Latitude.
  • You can annotate, evaluate, and debug your Vertex AI-powered features from the Latitude dashboard.
You’ll keep calling Vertex AI directly — Telemetry simply observes and enriches those calls.

Requirements

Before you start, make sure you have:
  • A Latitude account and API key.
  • At least one prompt created in Latitude.
  • A Google Cloud project with Vertex AI enabled.
  • A Node.js-based project that uses @google-cloud/vertexai.
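The later steps read your Google Cloud project ID from the environment and need your Latitude API key at hand. One way to keep both out of source control is to export them as environment variables; `GOOGLE_CLOUD_PROJECT` is the name the step-3 snippet actually reads, while `LATITUDE_API_KEY` is an illustrative name (the SDK takes the key as a plain constructor argument):

```shell
# GOOGLE_CLOUD_PROJECT is read by the code in step 3.
export GOOGLE_CLOUD_PROJECT="your-gcp-project-id"

# Illustrative variable name -- pass its value wherever the guide
# shows 'your-latitude-api-key'.
export LATITUDE_API_KEY="your-latitude-api-key"
```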

Steps

1

Install requirements

Add the Latitude Telemetry package to your project:
npm add @latitude-data/telemetry @opentelemetry/api
2

Initialize Latitude Telemetry with Vertex AI

Create a LatitudeTelemetry instance and pass the Vertex AI module as an instrumentation.
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import * as VertexAI from '@google-cloud/vertexai'

export const telemetry = new LatitudeTelemetry('your-latitude-api-key', {
  instrumentations: {
    vertexai: VertexAI, // Enables automatic tracing for Vertex AI
  },
})
3

Wrap your Vertex AI-powered feature

Wrap the code that calls Vertex AI with a Telemetry prompt span, and execute your Vertex AI call inside that span.
import { context } from '@opentelemetry/api'
import { BACKGROUND } from '@latitude-data/telemetry'
import { VertexAI } from '@google-cloud/vertexai'
import { telemetry } from './telemetry' // the instance created in step 2; adjust the path to wherever you exported it

export async function generateSupportReply(input: string) {
  const $prompt = telemetry.prompt(BACKGROUND(), {
    promptUuid: 'your-prompt-uuid',
    versionUuid: 'your-version-uuid',
  })

  await context
    .with($prompt.context, async () => {
      const vertexAI = new VertexAI({
        project: process.env.GOOGLE_CLOUD_PROJECT!,
        location: 'us-central1',
      })

      const model = vertexAI.getGenerativeModel({
        model: 'gemini-3-pro',
      })

      const result = await model.generateContent(input)
      const response = result.response // GenerateContentResult exposes the response directly

      // Use response here...
    })
    .then(() => $prompt.end())
    .catch((error) => $prompt.fail(error as Error))
    .finally(() => telemetry.flush())
}
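The `then`/`catch`/`finally` chain above closes the span exactly once — `end()` on success, `fail()` on error — and flushes buffered telemetry either way. Here is a minimal sketch of that lifecycle using a stand-in span object, so you can see the ordering without calling any real APIs (`PromptSpan`, `makeSpan`, and `runWithSpan` are illustrative names, not part of the Latitude SDK):

```typescript
// Stand-in for the telemetry span: records which lifecycle calls happen.
type PromptSpan = {
  events: string[]
  end: () => void
  fail: (error: Error) => void
  flush: () => void
}

function makeSpan(): PromptSpan {
  const events: string[] = []
  return {
    events,
    end: () => events.push('end'),
    fail: () => events.push('fail'),
    flush: () => events.push('flush'),
  }
}

// Mirrors the shape of the wrapper in step 3: run the work, close the
// span on success, mark it failed on error, and always flush.
async function runWithSpan(span: PromptSpan, work: () => Promise<void>) {
  await work()
    .then(() => span.end())
    .catch((error) => span.fail(error as Error))
    .finally(() => span.flush())
}

async function demo() {
  const ok = makeSpan()
  await runWithSpan(ok, async () => {
    /* Vertex AI call succeeds */
  })
  console.log(ok.events.join(',')) // end,flush

  const bad = makeSpan()
  await runWithSpan(bad, async () => {
    throw new Error('model error')
  })
  console.log(bad.events.join(',')) // fail,flush
}

demo()
```

Note that because `catch` swallows the error after recording it, `generateSupportReply` resolves rather than rethrows on failure; if callers need the error, rethrow it inside the `catch` after calling `$prompt.fail`.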

Seeing your logs in Latitude

Once you’ve wrapped your Vertex AI-powered feature, you can see your logs in Latitude.
  1. Go to the Traces section of your prompt in Latitude.
  2. You should see new entries every time your code is executed, including:
    • Input/output messages
    • Model name
    • Latency and error information