Ask HN: Why can't Codex/Claude compile the app and test that the changes worked?
Right now, both Codex and Claude make changes based on your request, but it's still you who has to test those changes. Why can't Codex and Claude test them too?
For example, when making a change to a website, why can't Claude or Codex compile the change, open a browser, and test it to confirm that it worked?
It seems like the ability to verify is hugely important for autonomy, yet neither is doing it.

What do you mean? Even Claude for Web can do the full compile-edit cycle. It runs Debian, apt-installs packages on demand, and can run anything its Debian has. I built a Chrome extension so Claude can get the full rendered DOM via a socket for web work. Why do you think it can't? Are you perhaps using the Claude chatbot? Claude Code will run anything you instruct it to. I have to tell it to stop more often than to start!

So you had to build a Chrome extension for Claude so it can see its changes? Can Claude Code open Chrome to test its changes by default? And no, not running the chatbot. Running Claude Code CLI / Codex CLI.

I'm back at my desk now; here's something from my current Claude session of it writing and running code (transcript below). Are you saying yours cannot do this?

It looks like it can execute scripts. How about browser testing to verify?

I didn't have to, no. It can run curl and get whatever HTML was served. But how else would Claude see the live DOM? The only way is through something running in the browser at runtime. And yes, Google Chrome added an MCP into Chrome for doing this, but I use Vivaldi. When coding manually, how would you check your code was working when it came to serving HTML? Are you telling me your Claude / Codex cannot run curl to look at what was served? If you need help with something you can ask; no need to be brusque.

Why can't it just open the changes in Chrome and check them?

You mean like this? It can. (See the headless-Chrome one-liner at the end of the thread.)

For Android, I have it dump screenshots and UIAutomator XML dumps, and for web, the Playwright MCP. I find that a critical step in agentic development is closing the loop for the LLM so it can get direct feedback without you having to handle it manually.

It seems like Cursor can do this?
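To make the curl point concrete, here is a minimal loop-closing check of the kind described above. This is a sketch, not anyone's actual setup: the URL (a dev server assumed at 127.0.0.1:8000) and the marker string are hypothetical.

    #!/usr/bin/env python3
    """Fetch the served page and assert the change landed.
    URL and MARKER are assumptions, not from the thread."""
    import sys
    import urllib.request

    URL = "http://127.0.0.1:8000/"   # assumed local dev server
    MARKER = "new-feature-banner"    # assumed string the change should add

    html = urllib.request.urlopen(URL, timeout=5).read().decode("utf-8", "replace")
    if MARKER in html:
        print("OK: marker found in served HTML")
        sys.exit(0)
    print("FAIL: marker missing from served HTML")
    sys.exit(1)

An agent with shell access can run this after each edit and branch on the exit code, which is exactly the "close the loop" step described above.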
The session transcript referenced above:

Nawin.Auth is running on 567. Let me write a quick test to capture exactly what bytes Nawin.Auth sends vs Crabbit (on 5670):
● Write(/tmp/compare_auth.py)
⎿ Wrote 83 lines to ../../../../tmp/compare_auth.py
1 #!/usr/bin/env python3
2 """Compare auth server responses byte-by-byte"""
3 import socket
4
5 def test_auth(host, port, name):
6     print(f"\n=== Testing {name} on {host}:{port} ===")
7
8     # Build AuthTreq request (141 bytes)
9     # type[1] + authid[28] + authdom[48] + chal[8] + hostid[28] + uid[28]
10     request = bytearray(141)
… +73 lines (ctrl+o to expand)
● Bash(sudo lsof -i :5670 2>/dev/null | head -3)
⎿ COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
run-auth- 95646 root 9u IPv4 139735 0t0 TCP *:5670 (LISTEN)
● Bash(python3 /tmp/compare_auth.py 2>&1)
⎿ === Testing Nawin.Auth on 127.0.0.1:567 ===
Request (141 bytes):
Hex: 01676c656e6461000000000000000000000000000000000000000000006e6177696e00000000000000
… +53 lines (ctrl+o to expand)
● Both send 145 bytes, same format! Differences are only in the random session keys (bytes 64-72 and 136-144). The ticket encryption is identical.
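As an aside, the layout comment in the transcript fully determines the 141-byte request size. Here is a standalone sketch of packing those fields. It is not a reconstruction of the elided compare_auth.py; the "glenda"/"nawin" values follow the visible hex, while the zeroed challenge and empty hostid are placeholders.

    #!/usr/bin/env python3
    """Pack the layout named in the transcript:
    type[1] + authid[28] + authdom[48] + chal[8] + hostid[28] + uid[28] = 141.
    Values beyond the visible 'glenda'/'nawin' hex are hypothetical."""
    import struct

    def pack_auth_request(rtype, authid, authdom, chal, hostid, uid):
        # Fixed-width, NUL-padded fields; sizes sum to 141 bytes.
        fmt = "<B28s48s8s28s28s"
        return struct.pack(fmt, rtype,
                           authid.encode(), authdom.encode(),
                           chal, hostid.encode(), uid.encode())

    req = pack_auth_request(1, "glenda", "nawin", b"\x00" * 8, "", "glenda")
    assert len(req) == 141

struct's fixed-width "s" fields NUL-pad short strings, which matches the zero padding visible in the transcript's hex dump.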
The "You mean like this?" example, dumping a page's rendered DOM with headless Chrome:

    chrome --headless --dump-dom "https://example.com" > dom.html
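For the Playwright MCP route mentioned in the thread, the direct-script equivalent looks roughly like this. A sketch under assumptions: Playwright is installed (pip install playwright, then playwright install chromium), and the URL and selector are hypothetical.

    #!/usr/bin/env python3
    """Render a page in headless Chromium and inspect the live DOM.
    URL and selector are assumptions, not from the thread."""
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("http://127.0.0.1:8000/")   # assumed local dev server
        dom = page.content()                  # full rendered DOM as HTML
        banner = page.query_selector("#new-feature-banner")  # assumed selector
        print("banner rendered" if banner else "banner missing")
        browser.close()

Unlike curl, this executes JavaScript before reading the DOM, which is the "live DOM" distinction raised in the thread.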