Hire based on the conversation about code, not the code itself

Dima Barabash

Weeell, okay, here we go... The process most teams use to hire engineers doesn't give the signal it was built for anymore.

The processes that some (most) companies use to hire engineers today give you no real signal about whether a candidate fits the requirements at all.

A board asking for code. Someone drew the only honest answer.

The whiteboard checks only one thing: whether a person can write code by hand, without any tools, under pressure. That is not the job an engineer does today, or should do. An engineer now makes decisions at the product level and communicates with the team at different levels. The whiteboard checks a narrow result, under fake conditions. And code is already a result. What matters is the thinking behind it.

Why a take-home test doesn't fully solve the problem

It's a step in the right direction, but it doesn't close the main gap: when you read someone else's code, you read an artifact. You don't see the process that led to it. You don't know where the candidate got stuck, what they tried and threw away, which decisions they made half-heartedly and which with full conviction. You see the final result. It could have been written in an hour with full understanding, or in three days with a lot of help from someone else.

Reviewing a take-home test by just reading it creates an illusion of objectivity. You might think taste in code is visible right away, but taste is a tricky thing. What a senior engineer reads as bad taste may simply be a different taste, a different approach. Many countries have soup as a traditional dish, but we don't argue about which soup is the most correct, right? Though for me, borscht is the best... In the end, it's just a different taste, a different practice. And rejecting a candidate because they committed something in a way you don't like isn't really fair; it's a projection of your own preferences.

A take-home test without a live conversation about it is the same whiteboard, just with homework instead of the whiteboard.

Project + presentation

The process I believe in has four stages: initial screening → conversation with the hiring manager → project → project presentation to the team you will work with.

The first two are standard. We've all been through them, and there is nothing to explain here. The third is an extended take-home test with one key difference: the task has to be big enough to give material for a serious conversation. Not an algorithmic puzzle, not a CRUD endpoint. A mini-product that a person can realistically build in a few days with an AI agent, but with enough surface area for real decisions: architecture, data, styles, API, error handling, tests. This is where a person shows themselves as a product engineer or a design engineer, and where you see the candidate at their best.

Three stages you prepare for. One where you show up as yourself.

But the most important thing happens at the fourth stage.

The candidate comes to, say, a one-hour session with two engineers. They DO NOT WRITE code. They present what they have already built, and then they answer questions.

The right questions at this stage are the ones that push the candidate out of the "I show the result" mode into the "I think out loud about my work" mode, and here you need to ask:

  • tell me which commit was the hardest to implement, so we can look at the diff
  • where did you get stuck the longest, and what did you do about it?
  • show me the place you are least happy with. What is wrong with it, from your point of view?
  • which tools and libraries did you consider before choosing the ones in the code? The thinking process matters here.

The first three check the ability to be honest about your own code. The fourth checks whether the candidate actually made a choice, or just took the first thing the agent suggested... Can you feel the difference? Oh, I feel it.
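For the "hardest commit" question, it helps to have the commands ready so you can pull the diff up live. A minimal sketch: the throwaway repo below is synthetic so the snippet runs anywhere; in a real session you would run the same `git log` / `git show` inside the candidate's project.

```shell
# Synthetic throwaway repo (placeholder identity and commits) purely so
# the commands below can be run anywhere; in the real session you point
# them at the candidate's repo instead.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "interviewer@example.com"
git config user.name "Interviewer"
echo "v1" > app.txt
git add app.txt && git commit -qm "first pass"
echo "v2" > app.txt
git commit -qam "rework error handling"

git log --oneline        # let the candidate point at the hardest commit
git show --stat HEAD     # which files that commit touched
git show HEAD            # the full diff to walk through together
```

The point of running it live, rather than asking for a summary, is that the diff anchors the conversation in decisions the candidate actually made.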

What this catches that the whiteboard didn't

Under whiteboard pressure, any normal person either hides their weaknesses or shows nothing but them. This is not a character trait; it's biology. When you are being evaluated, you won't say "I don't know how to do this better". You will fake confidence to the end, even if inside you don't know. Or your brain will simply blank on everything, even the things you know well. Because stress is terrible. Why make people stress out for no reason?

At the presentation of your own project, everything flips. You can defend code you wrote a few days ago in only two ways: through idealization ("everything is done correctly") or through honest reflection ("here I cut corners, here I didn't know a better solution, here I would redo it"). The first breaks under one or two well-placed questions. The second is the signal you need.

Looking at your own code as if someone else wrote it.

An engineer who says "I picked this solution because it's faster, though I know it will scale badly, and here is when I would rewrite it" is exactly the engineer you need. Not because they wrote perfect code, but because they can see their own code from a distance. It's the same skill you need in code review, in technical design discussions, in conversations about trade-offs within the team. The whiteboard doesn't see it at all.

This works for product roles, not for all the others. I'm writing about fullstack, frontend, backend, and mobile product roles. For infrastructure, platform, and ML/research roles, a three-day project won't recreate what makes that work hard: scale, legacy, reliability under real traffic. For those, it's probably better to keep the classic funnel with a technical deep dive and system design. I'm not claiming I know how to fix hiring for them; I just know this format is not the right tool there.

But how do you know the candidate really wrote the code themselves? This is the first question I get when I describe this process. If a candidate builds a project at home with an AI agent, how do you tell if they wrote the code, or a friend, or a freelancer, or another AI operator?

The short answer is: YOU DON'T CHECK THE CODE. You check the conversation about the code. Well, of course you look at the code, but...

The presentation stage is not a formality. It's half of the signal, and the project is the other half. A live session about your own code is almost impossible to pass for a person who didn't write it. Not because you can't learn it by heart, but because the questions at the presentation are not about what is written. They are about why it was written this way and not another, what was considered and rejected, and what was annoying along the way.

A person who wrote it themselves has a history of decisions. They remember the dead ends they walked into, the alternatives they tried, the trade offs they made. A person who got ready made code doesn't have this history. They can explain what a function does. They can't explain why it wasn't built differently.

This is not a perfect filter; a well-prepared candidate can get around it. But to do that, they have to spend so much time understanding someone else's code that it becomes easier to write their own. The economics of cheating stops working.


I am not claiming this is a perfect process. I am claiming it is more honest than the one that most teams I have been on use.

And again: when you are the one evaluating, it is very important to ask the right questions. It's like a real interview, the kind we watch with Lex Fridman. It sounds soooo simple, but it's soooo important to be able to open up a candidate and find their strong sides. With the right questions, some candidates open up remarkably and really show themselves.

By the way, a small note. If it's so important to you that a person can write code without an AI agent or some other helpers, ask yourself a question: do you meet this requirement yourself?