Overview

This guide shows you how to integrate Latitude Telemetry into an existing application that calls models via Google AI Platform. After completing these steps:
  • Every AI Platform model invocation can be captured as a log in Latitude.
  • Logs are attached to a specific prompt and version in Latitude.
  • You can annotate, evaluate, and debug your AI Platform-powered features from the Latitude dashboard.
You’ll keep calling Google AI Platform directly — Telemetry simply observes and enriches those calls.

Requirements

Before you start, make sure you have:
  • A Latitude account and API key.
  • At least one prompt created in Latitude.
  • A Node.js-based project that already calls Google AI Platform using your preferred client library.

Steps

1

Install requirements

Add the Latitude Telemetry package to your project:
npm add @latitude-data/telemetry @opentelemetry/api
2

Initialize Latitude Telemetry

Create a LatitudeTelemetry instance and register the Google AI Platform client module under instrumentations so its model calls are traced automatically.
import * as AIPlatform from '@google-cloud/aiplatform'
import { LatitudeTelemetry } from '@latitude-data/telemetry'

export const telemetry = new LatitudeTelemetry('your-latitude-api-key', {
  instrumentations: {
    aiplatform: AIPlatform, // Enables automatic tracing for AIPlatform
  },
})
3

Wrap your AI Platform-powered feature

Wrap the code that calls Google AI Platform with a Telemetry prompt span, and execute your model call inside that span.
import { context } from '@opentelemetry/api'
import { BACKGROUND } from '@latitude-data/telemetry'
// Import your existing AI Platform client here

export async function generateSupportReply(input: string) {
  const $prompt = telemetry.prompt(BACKGROUND(), {
    promptUuid: 'your-prompt-uuid',
    versionUuid: 'your-version-uuid',
  })

  await context
    .with($prompt.context, async () => {
      // Example: use your existing AI Platform client
      // const response = await aiPlatformClient.predict({
      //   endpoint: 'projects/.../locations/.../endpoints/...',
      //   instances: [{ content: input }],
      // })

      // Use response here...
    })
    .then(() => $prompt.end())
    .catch((error) => $prompt.fail(error as Error))
    .finally(() => telemetry.flush())
}
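The promise chain above implements a standard span lifecycle: end the span on success, record the error on failure, and flush regardless. As a minimal sketch of that control flow, using a stand-in span type rather than the actual Latitude Telemetry API, it is equivalent to:

```typescript
// Stand-in span type mirroring the end/fail surface used above.
// Illustrative only; the real object comes from telemetry.prompt().
type Span = { end: () => void; fail: (error: Error) => void }

// Run `fn`, closing the span on success and recording failures on it.
async function withSpan<T>(
  span: Span,
  fn: () => Promise<T>,
): Promise<T | undefined> {
  try {
    const result = await fn()
    span.end() // success path, mirrors .then(() => $prompt.end())
    return result
  } catch (error) {
    span.fail(error as Error) // mirrors .catch((error) => $prompt.fail(...))
    return undefined
  } finally {
    // telemetry.flush() would run here, mirroring .finally(...)
  }
}
```

Either style works; the chained form in the guide keeps the flush in a .finally() so pending spans are exported even when the model call throws.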

Seeing your logs in Latitude

Once your wrapped AI Platform-powered feature runs, its calls appear as logs in Latitude.
  1. Go to the Traces section of your prompt in Latitude.
  2. You should see new entries every time your code is executed, including:
    • Input/output payloads
    • Model or endpoint name
    • Latency and error information