Announcing Latitude V2 — an agent issue monitoring platform
A clear path to reliable AI
Production failures become clear signals. Signals become fixes.
80%
Fewer critical errors reaching production
8x
Faster prompt iteration using GEPA (Agrawal et al., 2025)
25%
Accuracy increase in the first 2 weeks

Observability
Capture real inputs, outputs, and context from live traffic. Understand what your system is actually doing, not what you expect it to do.
View docs
Full traces
Observe your AI’s behaviour in the most comprehensive way
Usage statistics
Track token usage and keep spending under control
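As an illustration of the kind of spend tracking usage statistics enable, here is a minimal sketch that turns logged token counts into an estimated cost. This is not Latitude's API; the function, the model names, and the per-million-token rates are placeholder assumptions.

```typescript
// Hypothetical USD rates per 1M tokens (placeholder values, not real pricing).
const RATES: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
}

interface Usage {
  model: string
  promptTokens: number
  completionTokens: number
}

// Estimate total spend in USD for a batch of logged completions.
function estimateCost(usages: Usage[]): number {
  return usages.reduce((total, u) => {
    const rate = RATES[u.model]
    if (!rate) return total // unknown model: skip rather than guess
    return (
      total +
      (u.promptTokens / 1_000_000) * rate.input +
      (u.completionTokens / 1_000_000) * rate.output
    )
  }, 0)
}
```

Aggregating per model (or per prompt path) from the same usage records is what makes it possible to spot which part of a system drives spend.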
Most tools help you see what your AI is doing. The hard part is knowing where it fails and what to change.
Trusted by teams building AI products at scale
Enter the reliability loop
A proven method to understand, evaluate, and fix your AI products

1. Observability
Capture real inputs, outputs, and context from live traffic to understand what your system is actually doing.

2. Annotations
Annotate responses with real human judgment. Turn intent into a signal the system can learn from.

3. Issue discovery
Automatically group failures to surface recurring issues, detect common failure modes, and see where things break down across users and use cases.

4. Automatic evals
Convert real failure modes into evals that run continuously and catch regressions before they reach users.

5. Prompt manager + optimizer
Automatically test prompt variations against real evals, then let the system optimize prompts using GEPA to reduce failures over time.
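The automatic-evals step above can be pictured as a check derived from a real failure mode that runs over every new trace. A minimal sketch follows; the trace shape, the eval, and the specific failure mode ("replies about refunds without citing the refund policy") are all hypothetical, not Latitude's API.

```typescript
// A captured production trace (simplified for illustration).
interface Trace {
  input: string
  output: string
}

// An eval is a predicate over a trace: true = pass, false = regression.
type Eval = (trace: Trace) => boolean

// Hypothetical eval for the failure mode "refund questions answered
// without referencing the refund policy".
const mentionsRefundPolicy: Eval = (trace) =>
  !trace.input.toLowerCase().includes('refund') ||
  trace.output.toLowerCase().includes('refund policy')

// Run an eval over a batch of new traces and report the pass rate.
function passRate(traces: Trace[], check: Eval): number {
  if (traces.length === 0) return 1
  return traces.filter(check).length / traces.length
}
```

Tracking the pass rate over time is what turns a one-off bug report into a continuous regression guard.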


Get started now
Start with visibility.
Grow into reliability.
Start the reliability loop with lightweight instrumentation. Go deeper when you’re ready.
View docs
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import OpenAI from 'openai'

const telemetry = new LatitudeTelemetry(process.env.LATITUDE_API_KEY, {
  instrumentations: { openai: OpenAI },
})

async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // A path to identify this prompt in Latitude
    },
    async () => {
      const client = new OpenAI()
      const completion = await client.chat.completions.create({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: input }],
      })
      return completion.choices[0].message.content
    }
  )
}
Instrument once
Add OTEL-compatible telemetry to your existing LLM calls to capture prompts, inputs, outputs, and context.
This gets the loop running and gives you visibility from day one
Learn from production
Review traces, add feedback, and uncover failure patterns as your system runs.
Steps 1–4 of the loop work out of the box
Go further when it matters
Use Latitude as the source of truth for your prompts to enable automatic optimization and close the loop.
The full reliability loop, when you’re ready
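The "automatic optimization" step can be pictured as scoring prompt variants against a fixed eval set and keeping the winner. GEPA itself uses reflective, Pareto-based evolution, so the sketch below is only a toy stand-in for the "test variants against real evals, keep what wins" idea; every name in it is hypothetical, not Latitude's API.

```typescript
// A labeled eval case drawn from real production failures.
interface Case {
  input: string
  expected: string
}

// A variant maps an input to an output (in practice: a prompt template
// rendered and sent to a model; here, a plain function for illustration).
type Variant = (input: string) => string

// Fraction of eval cases a variant gets right.
function score(variant: Variant, cases: Case[]): number {
  if (cases.length === 0) return 0
  return cases.filter((c) => variant(c.input) === c.expected).length / cases.length
}

// Return the index of the best-scoring variant.
function pickBest(variants: Variant[], cases: Case[]): number {
  let best = 0
  for (let i = 1; i < variants.length; i++) {
    if (score(variants[i], cases) > score(variants[best], cases)) best = i
  }
  return best
}
```

The point of running this against evals built from real failure modes, rather than synthetic benchmarks, is that "best" is defined by what actually broke in production.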
Get started for free
Build AI
you can trust
Works with Vercel AI SDK, LangChain, OpenAI SDK, and most common model providers.
Frequently asked questions
How can I see where my AI fails in production?
Is it easy to set up evals in Latitude?
How does Latitude turn AI failures into improvements?
Does Latitude work with our existing stack?
Latitude Data S.L. 2026
All rights reserved.

Project funded with the support of ACCIÓ - Generalitat de Catalunya