Show HN: AgentLint – ESLint for your coding agents
I’ve been spending a lot of time with coding agents lately. Across Claude Code, Cursor, OpenCode, Codex, and different models, I kept noticing that some people were getting much better results from the same tools. It became clear that this was not just about prompting.
A big part of it was context drift. AGENTS.md, skills, rules, and workflows looked fine, but were no longer aligned with the code.
I also learned that more context does not always help. Sometimes it adds noise and wastes tokens. The recent AGENTS.md paper also pushed me to think harder about this, especially around auto-generated context files and /init-style workflows.
Then I saw Microsoft’s writeup showing a jump from 38.1% to 69% after improving instruction setup. That made me take these files much more seriously.
AgentLint came out of that. It’s a small CLI that scans the repo and helps keep context files aligned. After setup, MCP handles most of the ongoing flow.
Give it a try: npx @agent-lint/cli
http://samilozturk.github.io/agentlint
Would really appreciate any feedback or criticism.

Oh, this is really useful. There's definitely a problem to be solved here. Agent guidance files, like all forms of documentation, can quickly grow stale. I've tried to tackle a similar problem with a couple of different approaches. One is a command I call "/retro", which goes through all recent history on a project (commits, PRs, PR comments, etc.) and analyzes the existing documentation to identify how to improve it in ways that would prevent any observed issues from happening again. This is less about adding structure to the docs (as AgentLint does) and more about identifying coverage gaps. The other is a set of tooling I've built to introduce multiple layers of checks on the outputs of agentic code. The initial observation was that many directives in CLAUDE.md files can actually be implemented as deterministic checks (ex: "never use 'as any'" --> "grep 'as any'"). By creating those deterministic checks and running them after every agent turn, I'm able to effectively force the agent to retain the appropriate context for those directives. The results are pretty astounding - among early users, 40% of agent turns produce code that doesn't comply with a project's own conventions. The system then layers on a sequence of increasingly AI-driven reviews at further checkpoints. I'd love feedback: http://getcaliper.dev

This is very cool, and I love the website design alone. Did you build that with AI as well? Would love to hear the process.

Author here. This sounds a little AI-generated :) but thank you, appreciate it.
And yes, I built the landing page with AI after the actual project, MCP, and CLI had already matured.
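As a footnote on the deterministic-check idea from the Caliper comment above: here is a minimal sketch of what such a post-turn check could look like. The file name, rule list, and file filter are hypothetical illustrations, not Caliper's or AgentLint's actual implementation; the point is just that a directive like "never use 'as any'" can be reduced to a grep-style scan over the files the agent touched.

    // check-conventions.ts - hypothetical sketch of a deterministic post-turn check.
    // It scans files the agent just modified for patterns that a CLAUDE.md
    // directive forbids, and exits non-zero so a hook can send the turn back.
    import { execSync } from "node:child_process";
    import { existsSync, readFileSync } from "node:fs";

    // Directives rewritten as regexes (example rules, not a real project's).
    const rules = [
      { name: "never use 'as any'", pattern: /\bas any\b/ },
      { name: "no console.log in committed code", pattern: /console\.log\(/ },
    ];

    // Files changed since the last commit, limited to TypeScript sources.
    const changed = execSync("git diff --name-only HEAD", { encoding: "utf8" })
      .split("\n")
      .filter((f) => (f.endsWith(".ts") || f.endsWith(".tsx")) && existsSync(f));

    let violations = 0;
    for (const file of changed) {
      readFileSync(file, "utf8").split("\n").forEach((line, i) => {
        for (const rule of rules) {
          if (rule.pattern.test(line)) {
            console.error(`${file}:${i + 1} violates "${rule.name}"`);
            violations += 1;
          }
        }
      });
    }

    // Any violation fails the check, forcing the agent to fix it before moving on.
    process.exit(violations > 0 ? 1 : 0);

Running it with something like "npx tsx check-conventions.ts" after each agent turn is one way to wire it in; the exact hook mechanism depends on the agent being used.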