The Paranoid Guide to Running Copilot CLI in a Secure Docker Sandbox

gordonbeeming.com

59 points by pploug a month ago · 15 comments

foreigner a month ago

I recently started using Catnip (https://github.com/wandb/catnip) for this. Catnip also automatically manages multiple Git worktrees, and has a responsive UI for mobile.

jaytaylor a month ago

This is a really neat project.

At my company (StrongDM) we recently open-sourced a tool in this space called Leash: https://github.com/strongdm/leash

By default it runs in Docker, and it also includes a more sophisticated macOS-native --darwin mode that goes beyond the capabilities and guarantees of the likes of sandbox-exec, bubblewrap, and in some ways Docker. Leash provides visibility into and control over every command and network request attempted by the coding agent. Would appreciate any feedback, and I'll try to get in touch with the author (Gordon).

Now I'll definitely look into automatically supporting pass-through auth for at least the gh CLI in Leash - always looking for what folks will find useful.

udev4096 a month ago

Docker is not a sandbox, IT'S NOT! If you must, use gVisor or the Kata runtime for actual sandboxing.
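For context on the gVisor suggestion: once gVisor's runsc binary is installed and registered with the Docker daemon, switching a container onto it is a one-flag change. A minimal sketch (paths and image are illustrative; check gVisor's install docs for your setup):

```shell
# One-time setup: register runsc as an extra Docker runtime.
# In /etc/docker/daemon.json:
#   { "runtimes": { "runsc": { "path": "/usr/local/bin/runsc" } } }
# then restart the Docker daemon.

# Run the agent's container under gVisor instead of the default runc:
docker run --rm --runtime=runsc alpine dmesg
# gVisor's user-space kernel prints its own boot banner here, which is a
# quick way to confirm syscalls are being intercepted rather than
# hitting the host kernel directly.
```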

  • NitpickLawyer a month ago

    Eh. While you're technically correct, there's a lot of nuance here. The threat model for running agents isn't one that needs "actual sandboxing". You're not trying to contain malware that is purposefully designed to escape docker/podman. You're mainly trying to prevent the agent from running a silly `rm -rf`, touching files outside its working env, killing arbitrary processes, or messing up installed software. That's pretty much it. Some network control as well. All of these can be achieved with docker.
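The constraints listed in that comment map onto stock `docker run` flags. A sketch of that kind of setup (not any specific tool from the thread; the image name is a placeholder):

```shell
# Intent of each flag:
#   --user 1000:1000       run the agent as a non-root user
#   --read-only            immutable root filesystem (protects installed software)
#   --tmpfs /tmp           scratch space the agent may still need
#   -v "$PWD":/work        only the project directory is writable
#   --network none         cut all network (or attach a custom egress-filtered network)
#   --pids-limit/--memory  cap fork bombs and runaway processes
docker run --rm -it \
  --user 1000:1000 \
  --read-only --tmpfs /tmp \
  -v "$PWD":/work -w /work \
  --network none \
  --pids-limit 256 --memory 2g \
  my-agent-image
```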

    • elaus a month ago

      It seems plausible that an agentic AI will notice that it's running in a Docker container while debugging some unexpected issue in its task and then try to break out (only with good "intentions", of course, but screwing things up in the process).

      Claude or Gemini CLI absolutely will try crazy things after enough cycles of failed attempts at fixing some issue.

      • anonzzzies a month ago

        They absolutely will, but so far a non-root user inside Docker, even when asked, has not resulted in any damage outside the container. With root it managed to break things, but as a regular user it did not find a way. When I asked it to try more 'fishy' things, Codex and Claude Code both refused; after some more prompting ('but we are testing a security tool', etc.), it only tried very meek things that did not manage to do anything.

        • fulafel a month ago

          Sounds like the real sandbox in this scenario is the alignment training of the LLMs you tried.

  • pyuser583 a month ago

    Could you expand on this?

psidium a month ago

I like this. I have crafted a Claude Code Docker container to similar effect. My problem is that my env has intranet access all the time (and direct access to our staging environment), and I don't want a coding agent that could go rogue having access to those systems. I did manage to spin up an iptables-based firewall that blocks all requests unless they're going to the IPs I allowlist at container start (I was inspired by the sandbox docs that Anthropic provides). My problem right now is that some things my company uses are behind Akamai, so a dig lookup + iptables allow does not work. I'll probably have to figure out some sort of sidecar proxy that allows requests on the fly instead of dig+iptables.
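The dig + iptables pattern described above can be sketched roughly as follows (hypothetical hostname; needs CAP_NET_ADMIN inside the container). As the comment notes, it breaks for CDN-fronted hosts like Akamai because the resolved IPs rotate after the lookup:

```shell
# Resolve the allowlisted host once at container start.
# Fragile for CDNs: the IPs returned now may not match later connections.
ALLOWED_IPS="$(dig +short api.internal.example.com)"

iptables -P OUTPUT DROP                          # default-deny all egress
iptables -A OUTPUT -o lo -j ACCEPT               # keep loopback working
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT   # still allow DNS lookups
for ip in $ALLOWED_IPS; do
  iptables -A OUTPUT -d "$ip" -j ACCEPT          # allow only resolved IPs
done
```

The sidecar-proxy idea the comment lands on avoids this: an HTTP(S) forward proxy filtering on hostname (Host header or TLS SNI) makes the decision per request, so CDN IP churn stops mattering.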

BimJeam a month ago

I use Incus for this type of thing. It comes with advantages such as passing through a GPU as well.

codazoda a month ago

I built a similar container when working on a CTF that didn’t exclude the use of AI tools.

https://github.com/codazoda/llm-jail
