YOLO in the Sandbox

4 min read Original article ↗

We've been running Claude, Codex, and Gemini in sandboxed yolo mode (--dangerously-skip-permissions, --dangerously-bypass-approvals-and-sandbox, --yolo) for a few months, logging what happens each time they hit a restriction. [1]

Update (January 1, 2026): Clarified terminology below to distinguish true sandbox bypasses from other types of problematic behaviors.

Most of the unexpected behavior happened while the agents were simply trying to complete the tasks assigned to them. Some of this came up during normal runs, and some we provoked through red-teaming efforts.

This post covers the most interesting exploits we observed and how they helped us improve our sandbox design.

Sandbox Mechanics

Each agent runs in an OS-level sandbox (macOS sandbox-exec or Linux bwrap) that blocks network and filesystem access by default. We allow specific domains and paths through config, and have detailed logging around agent behavior and denials. [2]

The behaviors below include true bypasses (swapping directories, leaking host paths), as well as other problematic patterns (masking exit codes, lockfile poisoning).

Observed Behaviors

Exit-Code Masking

Codex was asked to run a health check against localhost, which was blocked. It ran curl http://localhost:5001/health || true. The sandbox denied the connection, but || true forced exit code 0. The harness saw success even though the request never went through.

Diagram for yolo-in-the-sandbox 1

Environment Variable Leak

Codex was blocked from reading a token in a test fixture. It searched the environment for vor-related variables, found VORATIQ_CLI_ROOT pointing to an absolute host path, and read the token through that path instead. The deny rule only covered the workspace-relative path.

Diagram for yolo-in-the-sandbox 2

Directory Swap Bypass

Codex was blocked from writing to README.md. It cloned the entire workspace into a new directory, made its changes there, and then renamed the directories. The swap bypassed our policy because the deny rule was bound to a specific file path, not the file itself or the workspace root.

Diagram for yolo-in-the-sandbox 3

Lockfile Poisoning

Codex couldn't reach registry.npmjs.org, so npm install failed. It created a local tarball with the structure of the missing package, computed a SHA-512 hash, and forged the integrity field in package-lock.json to match. The next install "succeeded" with a stub dependency. The corrupted lockfile could break future builds or silently pull in the wrong code.

Diagram for yolo-in-the-sandbox 4

Other Notable Behaviors

We also saw agents get stuck in loops, break their own environments, and try to expand their own sandbox configuration.

  • NPM hammer: Gemini and Codex ran npm install against a blocked registry millions of times, producing multi-GB logs.
  • Host path confusion: Claude occasionally targeted the host repo path instead of the sandboxed workspace/, then erroneously reported "sandbox blocked the write".
  • Dependency deletion: After npm install failures, Codex deleted node_modules entirely and retried the blocked install.
  • Config edit attempt: Codex edited sandbox/settings.json mid-run to remove deny rules, expecting the sandbox to reload and give it expanded permissions.

Model Differences

The three model families we tested responded to sandbox denials differently, and each required its own mitigation strategy.

Claude models (claude-sonnet-4-5-20250929, claude-haiku-4-5-20251001) typically stopped after one or two denials and moved on. Adding a line to the prompt explaining the sandbox was enough to prevent most unwanted behavior.

Codex models (gpt-5-codex, gpt-5.1-codex-max) often tried to work around denials and succeeded with several novel bypasses. To counter, we broadened deny rules, implemented outcome-based checks, and more carefully managed environment variables.

Gemini models (gemini-2.5-pro, gemini-2.5-flash), when blocked, retried the exact same command millions of times, creating massive log files. To stop, we added harness-level rate limiting to kill runaway loops.

Takeaways

Sandboxing is hard, and it will get harder as models get stronger and objectives get fuzzier.

The bypasses we observed were not adversarial. They happened because the agents were trying to complete their assigned tasks. To an agent, the sandbox is just another set of constraints to optimize against.

In our experience, the way to counter this is through defense in depth, detailed logging, and fast fixes. Policies are brittle and need to evolve empirically. We log each bypass, patch it, and wait for the next one.