Coding agents suck at frontend because translating intent (from UI → prompt → code → UI) is lossy.
For example, if you want to make a UI change:
- Create a visual representation in your brain
- Write a prompt (e.g. "make this button bigger")
How the coding agent processes this:
- Turns your prompt into a trajectory (e.g. "let me grep/search for where this code might be")
- Tries to guess what you're referencing and edits the code
Search is a stochastic process, since language models have non-deterministic outputs. Depending on the search strategy, these trajectories range from instant (if lucky) to very long. Unfortunately, this means added latency, added cost, and worse performance.
Today, there are two solutions to this problem:
- Prompt better: Use @ to add additional context, write longer and more specific prompts (this is something YOU control)
- Make the agent better at codebase search (this is something model/agent PROVIDERS control)
Improving the agent means tackling a lot of unsolved research problems, like training better models for codebase search (see Instant Grep, SWE-grep).
Ultimately, reducing the number of translation steps makes the process faster and more accurate (and the gains scale with codebase size).
But what if there was a different way?
## Digging through React internals
In my ad-hoc tests, I noticed that referencing the file path (e.g. path/to/component.tsx) or a greppable string (e.g. className="flex flex-col gap-5 text-shimmer") made the coding agent much faster at finding what I was referencing. In short: there are shortcuts that reduce the number of search steps!
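For illustration, a prompt carrying such an anchor might read like this (the file path is made up; the class string is the example above):

```
Make the submit button bigger. It's in components/login-form.tsx,
the element with className="flex flex-col gap-5 text-shimmer".
```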
Turns out, React.js exposes the source location for elements on the page.¹ React Grab walks up the component tree from the element you clicked, collects each component's name and source location (file path + line number), and formats that into a readable stack.
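As a rough sketch of the mechanism (not React Grab's actual code, and built on undocumented internals that vary by React version): in development builds, React attaches its fiber node to each DOM element under a key like __reactFiber$<random>, and fibers can carry a _debugSource with the file and line. Walking fiber.return collects the ancestor components:

```ts
// Hedged sketch: relies on undocumented React internals that exist in
// development builds of many React versions (not guaranteed everywhere).
type DebugSource = { fileName: string; lineNumber: number };

// Find the fiber node React attaches to a DOM element in dev builds.
function getFiber(el: Element): any | null {
  const key = Object.keys(el).find((k) => k.startsWith("__reactFiber$"));
  return key ? (el as any)[key] : null;
}

// Walk up the fiber tree, collecting "<Name> path:line" for each
// function/class component above the clicked element.
function getComponentStack(el: Element): string[] {
  const stack: string[] = [];
  for (let fiber = getFiber(el); fiber; fiber = fiber.return) {
    // Host fibers (div, button, ...) have string types; skip them.
    if (typeof fiber.type === "function" && fiber.type.name) {
      // _debugSource may be absent (e.g. production builds or newer React).
      const src: DebugSource | undefined = fiber._debugSource;
      stack.push(
        src
          ? `<${fiber.type.name}> ${src.fileName}:${src.lineNumber}`
          : `<${fiber.type.name}>`
      );
    }
  }
  return stack;
}
```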
It looks something like this (component names and file paths in the example below are illustrative):
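```
<SubmitButton> components/login-form.tsx:42
  <LoginForm> components/login-form.tsx:12
    <LoginPage> app/login/page.tsx:5
```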
When I passed this to Cursor, it instantly found the file and made the change in a couple of seconds. Trying a couple of other cases gave the same result.
## Benchmarking for speed
I used the shadcn/ui dashboard as the test codebase. This is a Next.js application with auth, data tables, charts, and form components.
The benchmark consists of 20 test cases designed to cover a wide range of UI element retrieval scenarios. Each test represents a real-world task that developers commonly perform when working with coding agents.
Each test ran twice: once with React Grab enabled (treatment), once without (control). Both conditions used identical codebases and Claude 4.5 Sonnet (in Claude Code).²
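For concreteness, here is one hypothetical way to represent a case and a run; the actual harness and field names are not specified here, so these shapes are invented for illustration:

```ts
// Hypothetical shapes, invented to illustrate the setup above.
interface BenchmarkCase {
  id: number;
  prompt: string;     // natural-language UI change request
  targetFile: string; // file the agent must edit for a correct result
}

interface RunResult {
  caseId: number;
  condition: "treatment" | "control"; // with vs. without React Grab
  secondsToComplete: number;          // wall-clock time to a correct edit
}
```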