Show HN: FailCore – Execution-Time Safety Runtime for AI Agents

github.com

1 point by IntelliAvatar a month ago · 1 comment

Hi HN,

FailCore is a small execution-time safety runtime for AI agents.

Instead of relying on better prompts or planning, it enforces security at the Python execution boundary: blocking SSRF, private-network access, and unsafe filesystem operations before any tool side effect occurs.
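To make "enforcement at the execution boundary" concrete, here is a minimal sketch of the SSRF/private-network check in plain Python. This is not FailCore's actual API; the function name and behavior are illustrative only, showing a guard that runs before the underlying HTTP request (the tool side effect) ever fires:

    # Hypothetical illustration, not FailCore's API: resolve the target host and
    # refuse the call if it lands in a private, loopback, or link-local range,
    # before the tool's HTTP request is allowed to run.
    import ipaddress
    import socket
    from urllib.parse import urlparse

    def assert_public_url(url: str) -> None:
        """Raise if the URL resolves to a non-public address."""
        host = urlparse(url).hostname
        if host is None:
            raise ValueError(f"URL has no host: {url!r}")
        for info in socket.getaddrinfo(host, None):
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_private or addr.is_loopback or addr.is_link_local:
                raise PermissionError(f"Blocked request to non-public address {addr} ({url})")

    # The guard runs before the tool executes, so the side effect never happens:
    # assert_public_url("http://169.254.169.254/latest/meta-data/")  -> PermissionError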

I added a short live demo GIF in the README showing it blocking a real tool-use attack, along with a DESIGN.md that explains the architecture and trade-offs.

GitHub: https://github.com/zi-ling/failcore
Design notes: https://github.com/zi-ling/failcore/blob/main/DESIGN.md

Feedback welcome — especially thoughts on runtime hooking vs. kernel-level approaches like eBPF.

IntelliAvatar (OP) a month ago

One clarification that may help set expectations:

FailCore is intentionally not an agent framework, planner, or sandbox. It sits strictly at the execution boundary and focuses on two things:

1) blocking unsafe side effects before they happen
2) recording enough of the execution trace to replay or audit failures later
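As a rough sketch of how those two responsibilities can live in one wrapper (names like `guarded`, `policy`, and `trace_path` are made up here, not FailCore's API): run a policy check before the tool executes, and append every attempt, allowed or blocked, to an append-only trace that can be replayed or audited later.

    # Hypothetical sketch, assuming a caller-supplied policy callable that raises
    # PermissionError for unsafe inputs. Not FailCore's actual interface.
    import functools
    import json
    import time

    def guarded(policy, trace_path="trace.jsonl"):
        """Wrap a tool so unsafe calls are blocked and every attempt is logged."""
        def decorator(tool):
            @functools.wraps(tool)
            def wrapper(*args, **kwargs):
                record = {"tool": tool.__name__, "args": repr(args),
                          "kwargs": repr(kwargs), "ts": time.time()}
                try:
                    policy(*args, **kwargs)          # 1) block before the side effect
                    result = tool(*args, **kwargs)
                    record["status"] = "allowed"
                    return result
                except PermissionError as exc:
                    record.update(status="blocked", reason=str(exc))
                    raise
                finally:
                    with open(trace_path, "a") as f:  # 2) append-only audit trace
                        f.write(json.dumps(record) + "\n")
            return wrapper
        return decorator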

The goal isn’t to make agents smarter, but to make their failures observable, reproducible, and boring.

If people are curious, the DESIGN.md goes deeper into why this is done at the Python runtime level instead of kernel-level isolation (eBPF, VMs, etc.), and what trade-offs that implies.
