SlopCodeBench


v1.0 · 36 problems · 196 checkpoints · 19 models

A community benchmark measuring code erosion as agents iteratively extend their own solutions across checkpoints.

cp1 · regex search · +66 loc

```python
import re

def search(pattern, files):
    for f in files:
        if re.search(pattern, f.read_text()):
            yield f
```

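To make the checkpoint loop concrete, consider a hypothetical cp2: suppose the next requirement adds case-insensitive matching to the cp1 snippet above. The `ignore_case` flag and the requirement itself are illustrative assumptions, not actual SlopCodeBench checkpoints; the point is that a clean extension threads a flag through the existing loop rather than duplicating it:

```python
import re

def search(pattern, files, ignore_case=False):
    # Hypothetical cp2 extension of the cp1 snippet: optional
    # case-insensitive matching via re flags, reusing the same loop
    # instead of copy-pasting a second branch (the kind of redundant
    # structure the erosion metric is meant to penalize).
    flags = re.IGNORECASE if ignore_case else 0
    for f in files:
        if re.search(pattern, f.read_text(), flags):
            yield f
```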

Leaderboard (top 10), ranked by isolated solve rate (% / 100), erosion (0–1, ↓ lower is better), and verbosity (0–1, ↓ lower is better):

  • 01 · GPT 5.5 / Codex

  • 02 · GPT 5.3-Codex / Codex

  • 03 · GPT 5.4 / Codex

  • 04 · GPT 5.2-Codex / Codex

  • 05 · Anthropic Opus 4.6 / Claude Code

  • 06 · Anthropic Opus 4.7 / Claude Code

  • 07 · Kimi K2.6 / Kimi CLI

  • 08 · Anthropic Opus 4.5 / Claude Code

  • 09 · Anthropic Sonnet 4.6 / Claude Code

  • 10 · Cursor Composer 2 / Cursor CLI

Overview

SlopCodeBench evaluates coding agents the way real software actually gets built: through repeated requirement changes and extensions. Each problem is a sequence of checkpoints — the agent implements an initial version, then extends its own solution as new requirements arrive. Evaluation is black-box: only a CLI or API contract is given, with no prescribed architecture, function signatures, or module boundaries, so early design decisions compound across the run. Beyond correctness, we measure code erosion — verbosity, dead branches, and redundant structure — to surface the agents that stay clean under sustained change instead of patching their way into slop.
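The overview names verbosity as one component of erosion. The benchmark's actual formula is not given here, but a verbosity-style signal can be sketched as the fraction of an agent solution's lines that exceed a compact reference solution; the normalization below is an assumption for illustration only:

```python
def verbosity(agent_loc: int, reference_loc: int) -> float:
    """Toy verbosity score in [0, 1): the fraction of the agent's lines
    beyond a compact reference solution. 0 means at or below the
    reference size; values near 1 mean heavy bloat. This normalization
    is an illustrative assumption, not the benchmark's formula."""
    excess = max(0, agent_loc - reference_loc)
    return excess / (excess + reference_loc)
```

Under this toy scale, a solution exactly twice the reference size scores 0.5, and further bloat approaches but never reaches 1.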

Supported by

Special thanks to Snorkel AI for supporting this work through the Open Benchmarks Grant.

DARPA · National Science Foundation · Snorkel AI