AI assisted interviews - Slava Akhmechet


Over the past few months I’ve been running an experiment. When a candidate gets stuck solving an interview problem, I tell them they can use Claude or ChatGPT or any AI of their choice.

I found that candidates who were already doing well ended up doing well with AI, while candidates who were already failing didn't do any better. AI assistance doesn't help weak candidates. The correlation is very strong, like r=1.

When strong candidates get stuck on a problem they stay calm, form precise hypotheses, then systematically test one hypothesis at a time until they zero in on a solution. If you tell them to use AI they go through this same process faster. They form precise prompts, read responses carefully, then adjust their understanding of the program. They repeat this process until they find a solution.

When weak candidates get stuck, they blindly try things, hoping something sticks. If you tell them to use AI, their process stays sloppy. For example, they'll try a vague prompt like "what's wrong with this program?", then paste the response back into the IDE without fully understanding it, hoping it somehow works.

I thought AI would help weaker candidates perform better in interviews. You're stuck on a problem in a high-stress situation, and now you can rely on an AI collaborator to systematically understand and solve the problem together. But that's not what happens. Instead, AI amplifies people's existing patterns.

This might change in the future when agents are smart enough to save you from shooting yourself in the foot. If the current rate of progress persists, that might happen sooner than we think. For now, clear thinking is still king.

Jan 28, 2026