How is AI changing interview processes? Not much and a whole lot.


EDIT 2: This just in on October 8, 2025! Google just wrapped up a pilot of in-person SWE interviews at their biggest eng sites (Bay Area, Seattle, NYC, Poland, Bangalore). It sounds like it'll be 2 virtual interviews followed by 3-4 in-person interviews. The plan is to roll it out "aggressively" for all SWE roles in the next 2-3 months.

EDIT: This piece was written up in The Pragmatic Engineer newsletter as well.

A year and a half ago, we published a blog post called, "How hard is it to cheat in technical interviews with ChatGPT?" In that post, we talked about our experimental findings: if companies ask LeetCode questions verbatim (or even LeetCode questions with a small twist), cheating is actually quite easy. Only fully custom questions were largely impervious to cheating.

As a result, we predicted that the technical interview would have to change and that companies would move away from asking verbatim LeetCode questions and instead invest in creating much more customized questions, out of necessity. Despite that prediction, we bet heavily that, even if their content and format would change, algorithmic interviews were here to stay.

Now, we’re starting to see some movement, and lately, technical interviews have gotten a lot of attention in the press. As is often the case with the media, it’s largely clickbait, implying that companies (Meta and Anthropic[1] in particular) are now allowing AI in their interview processes. Of course, many engineers wishfully interpret these headlines to mean that algorithmic interviews are going away… but are they really?

And what else is changing? Will we return to in-person interviews? Will questions get harder? How rampant is cheating, and what are companies doing about it? If candidates can now use AI in interviews, what will these new interview types look like? And how does all of this differ between FAANG & FAANG+ companies and startups?

Finally, and most importantly, do we get to say “I told you so!”??

To figure this all out, we surveyed our pool of interviewers (all of whom hail from FAANG and FAANG-adjacent companies) and some of our users (who conduct interviews at their respective companies). Respondents primarily came from FAANG, with a few from FAANG-adjacent companies like Stripe and Uber.[2]

To supplement the survey responses, we also spoke to Amanda Richardson, the CEO of CoderPad, and Ilya Tilis, VP of Product at Codility, to get a feel for how their customers are integrating AI into interviews and assessments.

At the FAANGs, and especially at Meta, some stuff is changing. But, at the FAANG-adjacents, very little is changing at all![3] Here’s how a few of our FAANG-adjacent interviewers put it when we asked how AI has impacted their processes (though most just answered “N/A”):

My company is moving slowly, despite numerous incidents of cheating and other issues being raised.

Honestly, not much has changed on the interviewer side at my company. I do pay close attention to whether candidates seem to cheat, [but] that’s about it.

This is a bit surprising because the FAANG-adjacents are smaller and, at least in theory, more nimble than their FAANG cousins. But, whether we like it or not, the FAANGs have historically set the tone and structure of technical interviews for our industry, and everyone else has followed suit. In the ’90s, it was Microsoft. Today, it’s probably Meta. (I'll explain why Meta will likely lead AI adoption for technical interviews later in the post.) As one of our interviewers said in our survey:

I believe whatever precedent is set with Meta's exploration of allowing AI during an interview will be adopted by big tech.

So what’s changing at the FAANGs?

  • Algorithmic questions are here to stay, at least for now
  • Cheating (or at least the perception of it) is a real thing
  • Even though algorithmic questions aren’t going away, many companies have changed which questions they ask, and many interviewers are changing HOW they ask those questions
  • The biggest process change at the FAANGs is cheating detection software (but not all FAANGs use it)
  • Meta will be rolling out an AI-assisted interview as part of their onsite next quarter, but it will be part of the onsite and will NOT replace the algorithmic phone screen
  • It is unclear if there will be a return to in-person interviews

Startups, however, are a different animal entirely. They are making all sorts of AI-driven changes to both their questions and their processes! I’ll talk specifically about startups at the end of this post.

Although there isn’t a ton of change yet at the FAANG+ companies, the changes we have seen are heartening. Many years ago, an interviewer I met at Twitch said something that has turned into my favorite quote about interviewing: It’s the interviewer's job to see if he and the candidate can be “smart together.”

That’s it. That’s what a good interview is.

These changes will hopefully get our industry back on track, away from the arms race between candidates memorizing more and more LeetCode questions and companies raising their bar and expecting candidates to quickly regurgitate those answers.

Now, let’s dive into the details!

Methodology

We sent a survey to our roster of interviewers and a portion of our users. We got 67 responses, primarily from FAANG and FAANG-adjacent companies but also from a few startups.

In the survey, we asked respondents about how interview processes and questions at their current company had changed with the advent of AI. We also asked about the prevalence of cheating in interviews and what changes to interview processes they expected to see in the future.

We cautioned respondents to reply purely based on their own first-hand experience, rather than anything they might have heard from others or seen in the press.

Algorithmic questions are here to stay—for now

First, the big reveal. Of the 52 respondents from FAANG, ZERO said that their company has moved away from algorithmic questions.

However, over half of respondents said that in 2-5 years algorithmic interviews will not be as prominent as they are now. Nevertheless, about 20% of respondents remain convinced that algorithmic interviews will never go away.

Cheating (or at least the perception of it) is real, and some companies are investing in cheating detection tools

About a third of FAANG interviewers said that they’ve actually caught someone cheating during an interview, and 81% said that they suspected that candidates are using AI to cheat.

So, are companies responding by investing in cheating detection tools?

Yes. In fact, when we asked FAANG interviewers whether AI had meaningfully changed the interview process at their companies, the only process change any of them listed was adoption of cheating detection tools. But it still isn’t that common—overall, only 11% of FAANG interviewers said that their companies now use cheating detection software. But pretty much all of those interviewers (except one from Microsoft) came from Meta.

And Meta does indeed appear to be going all-in on cheating detection. As one of our Meta interviewers said:

[Cheating prevention] is pretty front-and-center at Meta right now. We now have to mark whether we suspect a candidate of cheating across nearly all interview types/levels (previously [it] was only coding interviews) and provide justification if so. We also are requiring candidates to share their entire screen and turn off all background filters (including blur) in most interviews as well.

I also spoke to Amanda Richardson, CEO of CoderPad, for more context about cheating detection and prevention. CoderPad arguably leads the industry in technical interview tooling and has Meta, select Microsoft teams, and many FAANG-adjacent companies among its customers. Amanda confirmed that CoderPad has enabled cheating detection (read: monitoring for suspicious activity like copy/pasting or leaving a browser tab) by default for all customers.
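To make “cheating detection” concrete, here’s a minimal sketch of the kind of browser-side signals such a tool might collect. To be clear, this is my own illustration, not CoderPad's actual implementation: only the DOM events (`paste`, `visibilitychange`) are real APIs, and the thresholds and names are invented.

```typescript
// Illustrative only: the kind of signals a detection tool might log.
// The 200-character threshold and all names here are invented.
type SuspicionEvent = { kind: string; at: number };

const suspicionLog: SuspicionEvent[] = [];

function record(kind: string): void {
  suspicionLog.push({ kind, at: Date.now() });
}

// Large pastes are a classic signal: humans type keystroke by
// keystroke, while an AI-generated answer tends to arrive all at once.
document.addEventListener("paste", (e: ClipboardEvent) => {
  const text = e.clipboardData?.getData("text") ?? "";
  if (text.length > 200) record("large-paste");
});

// Switching away from the interview tab (say, to a chatbot) flips
// the document's visibility state.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") record("left-tab");
});
```

Presumably the interviewer-facing side aggregates these events into a report rather than auto-flagging anyone on a single signal.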

Despite that, NONE of the FAANG-adjacent engineers we surveyed said that they use cheating prevention tools. One went so far as to say:

My company is moving slowly, despite numerous incidents of cheating and other issues being raised.

What’s the disconnect? Maybe interviewers are unaware of the cheating detection or simply disable it, but most likely, it’s too early to have an official policy across all teams, especially in companies with a high Chaos Score.

Though we expect cheating detection to become more standard in the coming months, the main way that FAANG+ companies currently combat cheating is by adjusting their questions and changing how they ask those questions.

Interview questions are changing (and may be getting harder)

Despite algorithmic questions not going anywhere, 58% of FAANG interviewers said that they have adjusted the kinds of algorithmic questions they ask. Here’s what’s changed, in interviewers’ own words[4]:

Meta:

I see AI usage as very similar to people who memorize LeetCode questions—the skillset I am looking for when interviewing is not memorization of a popular question, so asking a slight variation of a popular question is a very easy way to catch this. More open-ended questions which probe thinking, rather than [applying a] known pattern “X”. [Questions where the description] is much longer, [and the] setup is much more complicated.

Microsoft:

We choose questions that require the candidate to explain concepts fluently and coding problems that are more complex where LLM solutions often include “giveaways.” Variants of LeetCode questions. Or simply testing how they can expand on existing logic using class structure instead of re-writing the entire logic. More focus on systems-level understanding, more complex/practical implementation questions, [hand-picked] to trigger overly specific/rare and wrong responses by LLMs.
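As an aside, here’s a hypothetical illustration of the “expand on existing logic using class structure” format that quote describes. This is my sketch, not an actual Microsoft question; the rate-limiter scenario and every name in it are invented:

```typescript
// Hypothetical "extend, don't rewrite" exercise. The candidate is
// handed this working function...
function isRateLimited(timestamps: number[], now: number): boolean {
  // Allow at most 5 requests in any 60-second window.
  return timestamps.filter((t) => now - t < 60_000).length >= 5;
}

// ...and asked to wrap it in per-user state without touching the
// windowing logic itself:
class RateLimiter {
  private byUser = new Map<string, number[]>();

  allow(userId: string, now: number = Date.now()): boolean {
    const ts = this.byUser.get(userId) ?? [];
    if (isRateLimited(ts, now)) return false;
    ts.push(now);
    this.byUser.set(userId, ts);
    return true;
  }
}
```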

Amazon:

I make variants of problems to try to fool potential AI use (and to also go around people that memorize problems). Less copy paste [from] LeetCode, more thinking based [on the] same concept. Whatever I choose it won't be straight from LeetCode.

Google:

No LeetCode-style [questions, and instead,] questions with many sub-parts [which] are problem solving heavy instead of coding heavy. Different terminology [and an] abstraction layer which AI tools haven't been trained on. Likely something that requires 2+ data structure/algorithm techniques. Focused on more abstract ideas that aren’t easily tied to one answer. Questions with multiple solutions.

Roblox:

Since you can get [the] answer to most [LeetCode] questions immediately, we make sure that [the] question cannot be solved by [a] standard chatbot.

In our minds, anything that moves the industry away from forcing candidates to memorize questions is a very good thing. Of course, companies don’t improve interview processes just because it's the right thing to do. They only do it out of necessity, and cheating is the forcing function for making that happen.

Unfortunately, many companies still rely on taking a LeetCode question and modifying it with a small twist. In our cheating study from a year and a half ago, we saw that small twists are not enough—there was no significant difference in interview outcomes between candidates who cheated on verbatim LeetCode questions and lightly modified LeetCode questions.

If you really want your questions to be cheat-proof, you have to do the work to create custom questions yourself. We'll talk about how to do that in the appendix at the end of this post.

At this point, you might be wondering if interview questions are also getting harder. The answer appears to be, “Not so much.” Only 21% of FAANG interviewers admitted to deliberately asking harder questions. However, even if the questions themselves aren’t deliberately harder, our data shows that the bar for what constitutes a successful interview has indeed gone up over the past few years.

Interviewers are also changing how they ask questions

One of the more subtle changes we saw in the survey wasn’t about which questions were being asked. Rather, for about a third of FAANG+ interviewers we surveyed, the change was to how they were asking questions.

In their own words…

Meta:

I look for patterns in how [the] candidate approaches the solution. The AI has a certain pattern, [and] if the solution looks more like AI generated, I poke the candidate. When assessing a candidate, [I] not only look at if they can give a correct answer but if they actually know what they are doing. [I’m far] more focused on the WHY than the HOW. Also [I] try to ask some follow-ups with different parameters/constraints to quickly assess adaptability.

Microsoft:

[I ask] more fundamental follow up questions about basic data structures they've used in the problem to test if they know the data structure or just how it’s used in the problem. More granular [follow-up] questions like "what does this line of code do"/"What would happen to this function if we remove this line" and more curveballs, like “can you do it a different way?”
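To illustrate that “what does this line do” style of follow-up, here’s a hypothetical exchange around a common problem. This is my example, not an actual Microsoft question:

```typescript
// Candidate's working solution: index of the first non-repeating
// character in a string (hypothetical, for illustration).
function firstUniqueChar(s: string): number {
  const counts = new Map<string, number>();
  for (const c of s) counts.set(c, (counts.get(c) ?? 0) + 1);
  for (let i = 0; i < s.length; i++) {
    if (counts.get(s[i]) === 1) return i; // "What does this line do?"
  }
  return -1;
}

// Follow-ups in the spirit of the quote above:
// "What happens if we merge the two loops and return from the first?"
//   (It breaks: counts aren't complete until the first pass finishes.)
// "Can you do it a different way, without the Map?"
```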

Amazon:

Deeper understanding of the topics/subjects. Not just knowledge [of] facts. More focus on probing to evaluate understanding and question extensions.

Netflix:

[Asking] more clarifying questions. Sometimes I go back to earlier questions and ask them again in a different way 5 mins later to see if the answer is consistent.

Atlassian:

I think it is not about what question is asked but about how to ask follow-up questions or prompt the candidate to explain their thoughts/code.

DoorDash:

More rapid fire. Not copy pasting the full question to the candidate.

As with the changes to questions, these shifts in interviewer behavior are very welcome and positive signs. Across the board, they point to deeper interviewer engagement, more probing questions, and more collaboration with the interviewee.

What’s the deal with Meta’s new AI-assisted interview? And will it replace the algorithmic interview?

At the beginning of this post, I said that Meta is likely positioned to become the leader in determining what AI-assisted interviews will look like. As we already saw, Meta seems to be the only FAANG that’s fully invested in cheating detection. And from what we can tell, Meta is the only FAANG company experimenting with AI-assisted interviews.

A couple of interviewers mentioned that Meta is testing out AI-assisted interviews, and this was also the subject of the WIRED article that I linked to earlier. Will this new AI-assisted interview replace the algorithmic interview or supplement it?

Here’s what you need to know:

  • Right now, this new AI-assisted interview is being tested at Meta.
  • The intent is to roll it out to some candidates this quarter/next quarter.
  • It is NOT replacing the algorithmic phone screen. Rather, it will be part of the onsite, perhaps replacing one of the interviews currently in that loop.

As one of our Meta interviewers said:

Meta is actively testing these and plans to launch in Q3, so it’s not far away. That isn’t to imply AI-assisted interviews will occur for all roles/candidates before the end of this quarter, but they will start rolling out to real candidates by then… Like most things at Meta this will probably take the shape of "start small and rapidly iterate." I think that most likely means… an interview at the onsite stage for a subset of candidates and then we see what happens next based on outcomes/feedback.

From Ilya at Codility:

Based on our conversations with technology leaders, most see AI skills as an additional loop in their interview process, rather than a replacement at every stage.

And as a few of our interviewers said:

Microsoft: “AI is looming over interviews but not yet materially.”

Google: “AI ‘destroying LeetCode’ is overhyped.”

Depending on what happens in Meta’s experiments, we expect the other FAANG+ companies to follow suit, but some may take longer than others, and we don’t see any evidence that AI-assisted interviews will replace algorithmic ones.

We’re returning to in-person… maybe?

50% of FAANG interviewers predict that FAANG companies will soon return to in-person, compared to only 29% of FAANG-adjacent interviewers. However, interviewers from the same FAANG company were often split on whether in-person was coming back. Indeed, while 55% of Meta interviewers who were surveyed believe that Meta will return to in-person soon, one interviewer said:

We tried [in-person] onsite interviews [at Meta] again, and it was a total failure. Very unlikely we’ll go back.

Similarly, though half of Amazon interviewers believe that in-person interviews will return soon, a few explicitly mentioned that Amazon has not yet tried it:

I would expect interviews to go back in person, but Amazon hasn't shown any sign of that happening.

Amazon has not addressed [in-person interviews] internally, yet. We will need to do this ASAP.

Here’s a table showing what portion of interviewers from a given FAANG expect that in-person interviews are making a comeback.

| Company | Portion of Interviewers Expecting In-Person Return Soon |
|---|---|
| Amazon | 50% |
| Apple | 100% |
| Google | 67% |
| Meta | 55% |
| Microsoft | 78% |
| Netflix | 50% |

Judging from these quotes and numbers, I expect that, with perhaps the exception of Apple and despite what Sundar Pichai may have said on a podcast[5], there hasn’t been top-down guidance at the FAANGs on the matter.

And at the FAANG-adjacents, I’d wager that returning to in-person probably isn’t in the cards, yet.

Startups are a completely different beast, and they’re rapidly adopting AI in interviews.

Though our survey primarily targeted FAANG interviewers, we did get some responses from startup engineers, and those told a very different story.

67% of startup respondents said that AI has meaningfully changed the interview process at their company, versus 0% at FAANG and FAANG-adjacent. These changes included:

  • Getting rid of algorithmic questions
  • Adding an AI-assisted interview to the onsite loop
  • Getting rid of take-home assignments (this makes sense, as take-homes are much more prominent among startups than FAANGs)
  • Changing the phone screen to be AI-assisted

From Amanda at CoderPad:

We’re seeing the very fast death of the algorithmic take-home. That’s over. But I think what we are seeing is people replacing that with more of a project-based take-home if you will.

So what’s coming next? Both CoderPad and Codility say that it’s still early days for truly AI-assisted interviews, but it’s happening and in demand from more and more companies.

According to Amanda at CoderPad, about 20-30% of customers are actively using AI in their interview processes “where candidates use AI to understand the code base… debug from there and actually code the way many people envision coding in the future.” CoderPad customers have run over 35,000 AI-assisted interviews.

And according to Ilya at Codility, “Usage of our integrated AI Assistant has definitely accelerated over the last couple of quarters... This is built directly within our VS Code environment and features the capabilities that developers have come to expect from their day-to-day tools.”

Are AI-assisted interviews easier?

One of the questions we asked in our survey was: Do you believe AI assistance in coding interviews allows weaker candidates to pass interviews they otherwise would have failed? 75% said yes.

And based on the response to the news that Meta might be adopting AI in their interviews, many candidates also believe that being able to use AI will give them an edge.

Before you make too many assumptions, though, think back to your first open-book test. Chances are, you were thrilled about it and relieved—at first. But then, if your experience was anything like mine, that relief gave way to sheer terror when you actually saw the test and realized that nothing, and definitely not the open book, was going to save you.

When I spoke to Business Insider a few months ago about whether companies should allow candidates to use AI in interviews, I used the open-book vs. closed-book test analogy, and that analogy made it into the title of the piece. It’s apt because open-book tests are fundamentally different: they’re no longer asking you to regurgitate knowledge you’ve memorized. They’re asking you to synthesize information you should have already grokked and use it to solve difficult, novel problems.

Open-book tests don't lower the bar. They evaluate something different… and arguably something harder.

Here’s Amanda from CoderPad:

AI questions are 1000-2000 line code bases, and you have to add a feature in a short amount of time. Open-book is more representative of the job, but it’s making interviews harder and impossible to do without AI.

And Ilya from Codility:

Imagine an interview where the task isn't just to solve a small, isolated problem. Instead, a candidate is dropped into a large, unfamiliar codebase and asked to use integrated AI tooling to accomplish a realistic task. This could be implementing a new feature, addressing a list of security vulnerabilities, or refactoring a service to be more maintainable.

So to anyone who’s prematurely celebrating this new interview type, I’ll close with a great line from Amanda:

The AI Interview is an “addition” rather than a replacement. I can’t think of a team who has said, “We don’t need to know if you can code.” That still matters.

Conclusion

So where does this leave us? At the beginning of this post, I wondered if I’d have the right to say, “I told you so!”

This time, unlike when we made predictions about when hiring was coming back, I think we were right. Algorithmic interviews aren’t going anywhere, but AI-driven cheating has become a forcing function for interview questions and interview mechanics to improve.

In the future, I expect that AI-assisted interviews will be part of every loop, but they won’t be the whole loop, in the same way that closed-book and open-book tests serve different functions.

And what does all of this mean for interviewer quality? After all, the main reason that technical interviews have a bad reputation isn’t the questions; it’s the interviewers. I wish I could say that the new focus on engaging deeply with candidates, probing for understanding, and asking follow-up questions will force companies to take a closer look at both interviewer training and interviewer incentives—after all, if you want engineers to become better interviewers, you have to reward that behavior instead of treating interviews as an interruption between code commits.

But, after having been in this industry for a decade and a half, I’m not sure that companies will make this investment, at least not until another, future forcing function makes it inevitable. But hopefully, one day, the trope of the disengaged interviewer (off in another tab, just waiting for the interviewee to regurgitate the perfect answer) will be a thing of the past.

Thank you to Mike Mroczka for his help in drafting the survey that led to this post. Also thank you to Amanda Richardson, Natalia Panowicz, and Ilya Tilis for precious intel about how companies are using AI in interviews. And thank you to everyone who took our survey and wrote such detailed and thoughtful comments.

Finally, though I'm confident that everything in this post is accurate as of its publication date, the landscape is changing quickly. If you read this post and find that anything is inaccurate or outdated, please shoot me an email (aline@interviewing.io) and let me know.

Appendix

How to come up with good, if not cheatproof, custom questions:

Don't just re-skin existing problems

  • Changing the wording or context of a LeetCode question isn't enough
  • Questions that look custom can still be identical to existing problems
  • You need genuinely unique setups, inputs, and outputs so an AI can't recognize the underlying problem

Start a team-wide problem log

  • Create a shared doc where team members record interesting problems they solve
  • Include any problem, no matter how small
  • Don't worry about details; just capture the core concept
  • These will become seeds for questions that reflect actual work at your company

Distill real-world problems into something that can be asked in an interview

  • Strip away unnecessary complexity and setup time
  • Focus on the essential problem-solving aspect
  • Make sure candidates can understand the problem quickly
  • Remove company-specific context that adds confusion (see the worked sketch after this list)
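To make the distillation step concrete, here’s a worked sketch. Suppose your team once had to collapse duplicate log lines arriving in quick succession. That’s an invented scenario, and every name and parameter below is hypothetical, but it shows how a real task distills into a clean question with a DS&A core:

```typescript
// The question, as asked: "Given log entries sorted by timestamp,
// return them, dropping any entry whose message was already kept
// within the last `windowMs` milliseconds."
type LogEntry = { ts: number; msg: string };

function dedupeLogs(entries: LogEntry[], windowMs: number): LogEntry[] {
  const lastKept = new Map<string, number>(); // msg -> timestamp last kept
  const out: LogEntry[] = [];
  for (const e of entries) {
    const prev = lastKept.get(e.msg);
    if (prev === undefined || e.ts - prev >= windowMs) {
      out.push(e);
      lastKept.set(e.msg, e.ts);
    }
  }
  return out;
}
// The DS&A core (a hash map and a single pass) is intact, but the
// setup came from real work, so there's no verbatim answer to look up.
```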

Expect iteration

  • Your first version of home-grown questions won't be perfect
  • Test questions internally before using them with candidates
  • Refine based on how well they differentiate between candidates
  • The effort pays off with better signal on actual problem-solving ability

Don’t be afraid to keep a DS&A component in your questions

  • We’re not advocating the removal of data structures and algorithms from technical interviews. They’re a powerful tool for evaluating candidate potential, without requiring them to know specific languages or frameworks, which can change quickly.
  • DS&A questions have gotten a bad reputation because of bad, unengaged interviewers and because of companies lazily rehashing LeetCode problems, many of them bad.
  • In the hands of good interviewers, these questions are powerful and useful. If you use the approach above, you’ll be able to come up with new data structure & algorithmic questions that have a practical foundation and component that will engage candidates and get them excited about the work you’re doing.

Footnotes:

  1. Though I’ll save my rage about clickbait and the broader downfall of the American press for another time, I am going to call out the Fortune piece about Anthropic because it’s particularly egregious. The headline says, “$61.5 billion tech giant Anthropic has made a major hiring U-turn—now, it’s letting job applicants use AI months after banning it from the interview process.” BUT if you read the actual piece and then read Anthropic’s actual interview guidelines, you’ll see that the only “AI” that candidates are permitted to use is either for making resume tweaks or for doing research about Anthropic. So, you know, NOT during interviews. In fact, using AI during either take-homes or actual interviews is explicitly forbidden… so, exactly the opposite of what the headline said.

  2. We got 67 responses, mostly from FAANG employees but also from ten or so FAANG-adjacents like Stripe and Uber, as well as some startups.

  3. The one notable exception here might be Coinbase. We didn’t have any survey respondents from there, but one of the engineers proofreading this post mentioned that “[At Coinbase,] all coding rounds are AI-assisted.” I’m not surprised, given their announcement about going all-in on AI in hiring earlier this year. If anyone reading this has tips about Coinbase or other FAANG-adjacent companies embracing AI-assisted interviews, let me know.

  4. Astute readers might notice that Netflix and Apple are missing from the list. That’s deliberate. All of our Netflix interviewers said that their questions haven’t changed. Apple said they tend to ask more “real-world” coding questions. But, please take these data points with a grain of salt. Some FAANGs’ processes are way more standardized than others’, and Netflix (and Apple) are two companies where processes vary wildly from team to team. To read more about how standardized (or not!) different FAANGs are, check out our guide to FAANG interview processes. There we give each FAANG process a “Chaos Score” (Apple and Netflix have the highest two Chaos Scores!).

  5. Just like the other articles referenced in this post, this is another egregious example of clickbait. Saying something on a podcast is not an official company announcement. If it were, all of my employees would be licking doorknobs.