Show HN: A live Python REPL with an agentic LLM that edits and evaluates code
I built PyChat.ai, an open-source Python REPL written in Rust that embeds an LLM agent capable of inspecting and modifying the live Python runtime state.
A sample interaction:
py> def succ(n):
py> n + 1
py> succ(42)
None
ai> why is succ not working?
Thinking...
-> Listing globals
<- Found 1 globals
-> Inspecting: succ
<- Inspection complete: function
-> Evaluating: succ(5)
<- Evaluated: None
Tokens: 2102 in, 142 out, 2488 total
The function `succ` is not working because it calculates the result (`n + 1`) but does not **return** it.
In its current definition:
```python
def succ(n):
n + 1
```
The result of the addition is discarded, and the function implicitly returns `None`. To fix it, you should add a
`return` statement:
```python
def succ(n):
return n + 1
```
Unlike typical AI coding assistants, the model isn't just generating text: it can introspect the interpreter state and execute code inside the live session. Everything runs inside a Rust process embedding the Python interpreter, with a terminal UI where you can switch between Python and the agent via <tab>.
It’s very much a prototype, and definitely insecure, but I think the interaction model is interesting and potentially generalizable.
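To make the tool calls in the transcript above concrete ("Listing globals", "Inspecting: succ", "Evaluating: succ(5)"), here is a minimal sketch of what such introspection tools could look like as plain Python callables over the live session's globals. The function names and shapes are hypothetical, not PyChat.ai's actual API:

```python
import inspect

def list_globals(session_globals):
    # Tool: list user-defined names in the live session
    # (mirrors "Listing globals" -> "Found 1 globals").
    return [name for name in session_globals if not name.startswith("_")]

def inspect_name(session_globals, name):
    # Tool: describe what a global is (mirrors "Inspecting: succ").
    # Note: inspect.getsource only works for objects with source on disk.
    obj = session_globals[name]
    kind = type(obj).__name__
    detail = inspect.getsource(obj) if inspect.isfunction(obj) else repr(obj)
    return kind, detail

def evaluate(session_globals, expr):
    # Tool: evaluate an expression inside the live session
    # (mirrors "Evaluating: succ(5)" -> "Evaluated: None").
    return eval(expr, session_globals)
```

An agent loop would expose these as tools to the LLM, so a question like "why is succ not working?" can be answered by actually probing the running interpreter rather than guessing from text.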
Source: https://github.com/andreabergia/pychat.ai
Blog post: https://andreabergia.com/blog/2026/02/pychat-ai/

---

> it can introspect the interpreter state and execute code inside the live session.

So you could presumably also set it up to implement its suggestions (i.e. if it "executes" its proposed definition for `succ`, that would actually just update it in the global namespace). Fun stuff.

> a terminal UI where you can switch between Python and the agent via <tab>.

I'd suggest rethinking this; people usually expect the tab key to auto-complete.

At any rate, thanks for writing the description yourself. The usual LLM-generated copy for these kinds of projects is really obnoxious.

---

> So you could presumably also set it up to implement its suggestions (i.e. if it "executes" its proposed definition for `succ`, that would actually just update it in the global namespace). Fun stuff.

Yep, I've had that working in some earlier version of the tooling.

> I'd suggest rethinking this; people usually expect the tab key to auto-complete.

That's a fair point.

> At any rate, thanks for writing the description yourself.

I might let an LLM write the code, and even the user's manual, but I write my own blog posts and comments. :-)

---

Very cool idea. You can see the applications to debugger interfaces as well, not just REPLs.

---

Very cool. How about using caps lock for switching modes? Or vim-style?
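One commenter's point, that "executing" the agent's proposed definition would just update it in the global namespace, follows directly from how `exec` works against a shared globals dict. A minimal sketch (the `session_globals` name is hypothetical; PyChat.ai's internals may differ):

```python
# A live session's state is ultimately a globals dict; if the agent
# executes code against that same dict, its fix patches the session.
session_globals = {}

# The original, buggy definition typed at the py> prompt:
exec("def succ(n):\n    n + 1", session_globals)
assert session_globals["succ"](42) is None  # result discarded, implicit None

# The agent "applies" its suggestion by executing it in the same namespace:
fix = "def succ(n):\n    return n + 1"
exec(fix, session_globals)
assert session_globals["succ"](42) == 43  # the live session now has the fix
```

This is also why the interaction model generalizes: any tool that shares the interpreter's namespace (a debugger, a notebook kernel) could let the agent both diagnose and repair state in place.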