Ask HN: Typical tech job interview in late 2025?

2 points by dakiol a month ago · 5 comments

A year ago or so, I went through the "classic" tech interview. Not FAANG, but not an unknown company either, so something one or two levels below FAANG. Good pay, lots of senior+ engineers, and a tough environment where you can't really slack off (they call it "challenging"). The process was the usual:

* Intro call with a recruiter to get to know you and all that crap

* Live coding or a take-home assignment (plus a follow-up to explain the code). No AI or Googling allowed

* System design interview. Again, no AI or Googling

* Interview with an engineering manager. Behavioral interview questions, same rule: no AI

* Team/culture fit

Now I'm wondering how interviews look today. I use LLMs for about 50–70% of my projects (mostly greenfield ones), and they've become just another tool in my workflow, like CI/CD, testing, Datadog, or debuggers, to name a few. So I'm not sure whether I should prepare for interviews like before, or start integrating LLMs into my prep.

It feels odd to imagine a live coding interview with an LLM, tbh, where I'd have to pretend to think through the problem while basically guiding the model toward the solution. In practice, my process is more trial and error, almost brute force, but it works nicely, kind of like sculpting stone. I don't think anyone would judge you too harshly on how you use a debugger, as long as you get the job done... I have the same feeling about how one uses LLMs, but since they're relatively new, I guess one has to fake the workflow just to pass the interview.

Thoughts?

akshaykokane 22 days ago

I have been seeing the same thing. Teams are confused right now because interviews still measure 2015-era skills, while day-to-day work requires 2025-era AI collaboration skills. Most companies either ignore LLM usage completely or try to forbid it in interviews, even though developers will use it 50%+ of the time on the job.

One interesting direction I’ve been exploring is evaluating candidates on how they think with AI, not whether they avoid it. Things like debugging AI-generated code, verifying assumptions, identifying hallucinations, choosing when to trust the model, etc. These are the actual bottlenecks today, not LeetCode puzzles.

We built an internal tool that looks at this “cognitive intelligence” part instead of raw coding speed, and the signals have been much more predictive than traditional interviews. I feel like more companies will eventually move toward this kind of evaluation because banning AI in interviews makes less sense every day.

luponius a month ago

Had interviews last year insisting on the use of LLMs, and others tolerating it. Our head wants to introduce Codex into our workflows now, so pretending you're not using them, or joining a team that swears them off, had better come with a very good reason, I suppose?

  • dakiol (OP) a month ago

    Yeah, exactly. I'm using Codex, btw, so it feels weird to pretend I'm not using LLMs and that I write all my code with just my brain. But on the other hand, there's not much point in explaining how one uses LLMs to do a task... like, it would look ridiculous to share my screen and ask the LLM for 90% of the solution while the interviewer just watches the model's output... that's like analyzing how someone uses Google (and I swear 100% of engineers out there use Google for coding-related stuff, but I've never heard of a tech interview with a session to assess your Googling skills, right?)

    So, if we're not pretending, and companies want people who can use LLMs, then I think it's rather clear: no more live coding interviews, no more live system design interviews. Just send take-home assignments, because people WILL use LLMs to solve them. Then you evaluate the submissions offline and pick the best one.

    If anything, the only "live" interview needed is: are-you-a-real-person-and-not-an-asshole?

rekabis a month ago

Every corporation I have interfaced with over the last few months has demonstrated massively epic levels of FOMO over AI.

And yet, when I ask them how they are tracking AI's effectiveness, especially with regard to degrading skill sets, lowered creativity and effectiveness on complex or edge-case problems, slowed dev velocity, and ever more needless code complexity (with the associated ballooning of LoC) and gratuitous hallucinations breeding bugs like mealworm farms, all I get are crickets. Or worse, deer-in-the-headlights looks. They're all wildly unaware of the downsides that are slowly being confirmed by science.

Frankly, I feel that I am lucky that I’ve chosen a sabbatical to deal with my parent’s EoL issues. The chance that this will extend into the popping of the AI bubble appears to be non-trivial. By the time I start looking in earnest again, AI might not be a critical employment benchmark anymore.

Or one of my projects will become profitable and I won’t have to deal with all that bullshite.
