Blog | LaunchDarkly



A closer look at the 2026 Forrester Total Economic Impact™ of LaunchDarkly

Forrester finds LaunchDarkly delivers a 379% ROI through faster releases and lower risk.

Orchestrate and safeguard AI agents with AI Configs

LaunchDarkly AI Configs helps you control AI agents at runtime.

LaunchDarkly

Online evals in AI Configs are now GA

Online evals in AI Configs help you define and monitor quality in production.

Kelvin Yap

Protect yourself from vibe coding errors

Is AI-written code 100% trustworthy? Get control over vibe coding errors.

LaunchDarkly

Where homegrown feature flag systems break

Homegrown feature flag systems work at the start, but runtime demands expose hidden risks.

LaunchDarkly

Why metrics for top engineering teams look different

Top engineering teams measure release health by looking at different metrics.

LaunchDarkly

Progressive Delivery · Mar 06

Core capabilities of a feature management platform

Good feature management tools provide much more than flags.

LaunchDarkly

Beyond feature flags: LaunchDarkly vs. other release management tools

Not all release management platforms are built for production-grade control.

LaunchDarkly

Why "free" tools aren't free

DIY tools are great at the start...

LaunchDarkly

A false sense of security: Guardrails don’t prevent incidents

Tools alone aren't enough.

LaunchDarkly

Introducing sequential testing for LaunchDarkly Experimentation

Sequential testing lets you adapt quickly and check results as you go.

Cameron Savage

AI-generated code ships fast, but runtime control hasn’t kept up

AI is speeding up code generation, but control in production is lagging behind.

Metric Data Sources: import multiple tables for warehouse-native experimentation

Bring your own warehouse tables and schemas to power experimentation.

Eric Wang

Introducing LLM Playground for AI Configs

Test, compare, and trace LLM prompt and model variations before they reach production.

Kelvin Yap

LLM Evaluation: Tutorial & Best Practices

Learn how to properly evaluate large language models in various applications and contexts.

LaunchDarkly

Introducing stratified sampling for LaunchDarkly Experimentation

Support fair, reliable experiment outcomes by eliminating hidden sample bias.

Neha Julka

Meet the new navigation in LaunchDarkly

A cleaner, more focused navigation reduces noise and helps you move faster.

Sruthy Kumar

Creating better runtime control with LaunchDarkly and AWS

Ship bold AI changes without the guesswork.

Neha Julka

Introducing Audiences: See who your flags are really impacting

Instantly trace who saw your flag and what happened next.

Rachel Groberman

Online evals: LLM-as-a-Judge

Online evals in AI Configs give teams quality signals to successfully ship AI changes.

Kelvin Yap

Join us at AWS re:Invent 2025

Visit us at booth #1339!

Neha Julka

New Experimentation tools for PMs who test, learn, and move fast

Test, learn, and ship faster with new Experimentation tools.

Allison Rogers

Delivering adaptive AI with LaunchDarkly and Snowflake Cortex

LaunchDarkly & Snowflake enable AI delivery with real-time config and runtime safety.

Neha Julka

Less clutter, more control: Manage flag permissions at scale

Preset Role Scope and Flag Lifecycle Settings can help you issue cleaner, faster releases.

Bhargav Brahmbhatt

LaunchDarkly Tops G2 Grid for Feature Management
