Ask HN: What were interviews like before Leetcode?
In the process of recruiting right now and I'm happy to see more and more companies are moving towards non-leetcode style interviews. As a child of the leetcode generation, this got me thinking - what kind of technical questions were asked before leetcode?
Were algorithmic "gotcha" questions still asked? Were the questions easier? Was the bar just higher?
I'm also guessing there was no 'coderpad' or 'hackerrank' - was everything just done on a whiteboard and pseudo-coded?

In the 90's when I was at Microsoft it was common for people to ask brain teasers and algorithm questions and expect people to be able to reason through the problems even if they walked in not knowing the particular algorithm or data structures involved. The interview was graded more on your thought process and ability to make forward progress with hints, and less on getting to a correct answer in 25 minutes. People were super harsh about attention to detail, e.g. you had to write code that would compile on the whiteboard, and you would get penalized to some degree for making mistakes, especially if there were enough to make it clear that you hadn't really been using the language you claimed to be proficient in. https://www.amazon.com/How-Would-Move-Mount-Fuji/dp/03167784... was the classic book on MSFT interview questions before the software world switched to copying Google's leetcode interview style. Google found that the leetcode questions were the best metric for long-term performance because they were a practical proxy for IQ, and built their software empire on that style of interview. You can find references to pre-leetcode questions in Google's very early interview prep material, from the time before they had data showing that brainteasers weren't as effective as leetcode questions. Then everyone switched to copying Google's interview questions because they hoped to copy Google's success. The book isn't useful anymore for interviews, but it is a fun read if you like brainteasers.

Here's an idea. What about using a computer, 'cause that's what you are going to be doing. I'm just bitching about having to code a binary search on a whiteboard at Google, quizzed by a man who wears a cowboy hat to work, when I had previously written a search engine and indexed two countries.
(To be fair, it was a nice hat.) I didn't get the job; they claimed I wasn't good enough at coding, wtf! Well f*k them then.

I'm totally on board with what you're saying. Unfortunately a lot of tech is far too biased towards a small set of learnable skills that are sometimes hard to demonstrate in an interview setting, and the result is that a lot of good people don't get offers for jobs that they could totally crush.

Now sure, but this was Microsoft software in the 90s; no one wanted to restart their interview for a third time because the f*king thing bluescreened again.

We had algorithm questions, but they weren't absurd. Whereas you'll routinely run into leetcode "hard" questions in interviews now -- which should be a dead giveaway that the candidate has memorized the answer -- back then you'd maybe get one question that topped out at about medium. That's about the practical limit for a competent (but unexceptional) mortal who hasn't simply memorized a book of trivia. (And yes, we did these on whiteboards.) It wasn't that long ago that Joel Spolsky proposed fizzbuzz as an interview screen... and he really meant that you should ask that question. The idea was that you demonstrate that you can write code using a simple test, then move on to more important factors. Can you imagine such a thing today?

> Whereas you'll routinely run into leetcode "hard" questions in interviews now

Where are you seeing LC hard questions? I'm in a fairly big Slack where people have been comparing notes on interviews for years. The number of people who actually get LC hard problems is extremely small. A lot of people will claim they got LC hards immediately after an interview when they couldn't solve them, but when you ask them to describe the question and someone looks it up, it's always a medium. There were also sites where people could report recent LC problems they received from specific companies. Hard questions rarely made the list of commonly asked questions.
The only exception seems to be people interviewing at certain companies in India for some reason.

I personally hit them the last time I interviewed for IC jobs, which was around 2017. In fact, I had a rather hilarious circuit where one interview hit me with a leetcode hard that I couldn't solve, and I worked out the answer when I got home. Later, I went to another interview that asked me the same question. They thought I was a genius. I remember a fad that year was asking questions about doing depth-first search in 2D tensors. Once you learned that core trick well (mainly about handling the boundary conditions and traversals without messy code, which takes practice), about 80% of interview questions opened up to you. It was crazy how many companies were asking variations on that core question. So you'd do a few interviews naïve, bomb them, figure out the answers for the fads of the day, and be well-prepared by about your 3rd or 4th go-round. Idiocy, all of it. Even leetcode medium is too much to be pushing for a correct answer in an interview [1]. It's Kabuki theater for engineers, where the interviewee is pretending not to have memorized the answer, the interviewer is pretending that the interviewee hasn't memorized the answer, and everyone is pretending that this is a "signal" that matters.

[1] Assuming that they haven't seen the question, of course. I will say that a question of that difficulty level can be useful to push someone to their limits for other reasons. But these days, it's mostly just a pass/fail screen, and you do the leetcode medium perfectly while tapdancing backwards, or you don't move to the next round.

Or interviewers pretending they could answer the question. The great absurdity is that interviewers find a problem, work it out for themselves with unlimited time or just look up the answer, then ask it expecting someone to answer it cold. It's a complete farce.
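For readers who never hit that fad: the "core trick" for 2D-grid DFS questions mentioned above is to do all bounds and visited checks at the top of the recursive call, so the traversal itself needs no guards. A minimal sketch in Python; the island-counting variant and the function names here are illustrative stand-ins for that family of questions, not a specific company's problem:

```python
# Count connected regions of 1s in a grid: the archetypal 2D DFS question.
# The "core trick": validate bounds and visited-state at the top of dfs(),
# so the four recursive neighbor calls need no guards of their own.
def count_islands(grid: list[list[int]]) -> int:
    rows, cols = len(grid), len(grid[0])
    seen = set()

    def dfs(r: int, c: int) -> None:
        if not (0 <= r < rows and 0 <= c < cols):   # off the board
            return
        if grid[r][c] == 0 or (r, c) in seen:       # water, or already visited
            return
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            dfs(r + dr, c + dc)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                count += 1          # found an unvisited island; flood it
                dfs(r, c)
    return count
```

Once this skeleton is second nature, most of the era's variations (flood fill, word search, max area) are small edits to the `dfs` body.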
If the world was fair, and we actually thought these problems were useful, you'd be able to turn around and ask the interviewer an equally hard question and expect them to solve it, to know you're working with quality people. Chances are they would fail.

> last time I interviewed for IC jobs, which was around 2017.

2017 was a long time ago in industry terms. The FizzBuzz article came out in 2007, your interviews were 10 years after that, and we're now 7 years after that. Things have changed a lot. It's rare to encounter actual LC Hard problems in most interview loops. Most of the reports of LC Hards turn out to be Mediums upon closer examination. The current trend is for people to call anything a "hard" if they couldn't get it, not necessarily because it came from an LC hard problem.

I didn't quit working in tech in 2017. People are still using them -- I still see and hear interview questions -- I'm just not the one being interviewed. I don't know how "rare" it is across an entire industry... but then again, no one does. You're asserting something you cannot possibly know.

> 2017 was a long time ago in industry terms.

Oh, stop it.

"Where are you seeing LC hard questions?" I realize this isn't quite your point, but the GP also brought up that not too many years ago fizzbuzz was the literal test. There's a pretty big jump from fizzbuzz to leetcode hard. In fact, *any* leetcode is a jump up. While not quite the same issue, one thing I've found is I want to see more LOC from candidates, and a lot of the algo questions don't lend themselves to that. What I really want to see are their habits, style, and preferences.

A number of new grad loops in the EU had hards this year. Companies starting with U, M, etc.

Yeah, here in India people love choosing leetcode hards for when they come to a college and everyone in the college applies to them. Considering how big colleges are here, they just care about the filter rate of the test.
Leetcode hards filter the candidate set down to a size that they can realistically interview, and so they continue choosing that. Mediums sometimes let through too many people. In normal hiring, i.e. LinkedIn, or even college hiring where they look at resumes first, this nonsense rarely happens.

This is awful. It weeds out the people who are creative and love to code in favor of those who will memorize answers to get into a certain company.

woops

It'd be better all around to tell them they were extremely rude and that's why they're being declined.

The comment you're replying to is almost certainly a power fantasy and not actual behavior from an interviewer. Basically, "someone I had power over was rude to me, and I got revenge" with a relevant programmer story.

I use a fizzbuzz-level question in my phone screens for all levels of SDEs, and about 1/2 fail to solve it (across a wide range of experience; I've given up trying to predict who will fail). The other 1/2 quickly solve it and we move on to better stuff.

Yeah, similar. If you're experienced enough, fizzbuzz is so trivial that you can do it before the application. Hell, just tell them to copy-paste the answer. Some inadvertently paste the answer in the wrong programming language too, which suggests they're either not reading the question or not comfortable with the language they're interviewing for.

ive gotten fizzbuzz or some variation that's a play on the company's name a few times. i usually interview for like sre/devops type jobs though.

20+ YOE. I've never been fond of whiteboard technical interviews, which used to be the norm; I really struggle to draw and talk at the same time. I do fine in interviews normally. I tend to be more of a delivering-value-for-the-business kind of developer and strongly emphasise this in my interviews. I'm not writing operating systems and neither are most of the other people I work with and hire. These days, I don't do leetcode. If a company insists, I walk away.
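For anyone who hasn't seen the fizzbuzz screen discussed above, it really is this small; a minimal sketch in Python (the original proposal doesn't prescribe a language, and the function name here is just illustrative):

```python
# FizzBuzz: print 1..100, but multiples of 3 print "Fizz",
# multiples of 5 print "Buzz", multiples of both print "FizzBuzz".
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print("\n".join(fizzbuzz(i) for i in range(1, 101)))
```

The point of the screen was never the code itself; it was that, as the commenter above reports, a surprising fraction of applicants can't produce even this, which makes it a cheap first filter before moving on to better questions.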
I have better things to do with my time than memorise a bunch of useless information (for some value of useless). I'm mostly on the other side of interviews now and am firmly of the belief that you can get only a limited set of signals during an interview. The "Thinking, Fast and Slow" view is that we're not very good at evaluating people from a gut feeling. You might think this would push me towards leetcode and other quantitative measures; however, I'm much more interested in working out whether you can be good on a team. The last thing I want is an asshole 10x engineer that makes everyone else unhappy. If you can't actually program, I'm going to work that out by watching your PRs, and you won't pass probation. I'm not suggesting I do NO checking in the interview, just that I put limited stock in what can be read during this process.

Also equivalent career experience, including lots of interviewing experience, and I agree completely. Leetcode problems are almost useless for determining what matters in a professional engineer. But the stuff that really matters (communication skill, clarity, patience, flexibility / lack of dogmatism, taste, constructive criticism, political savvy, prioritization of constraints, willingness to write documentation, reading code and finding bugs, etc.) is not something a new-career engineer a few years out of an undergrad CS program can competently evaluate, because they're probably not very good at these things themselves. And in this industry, that's largely who is doing the interviewing.

This is true. I'm somewhat disconnected from FAANG and bay area behaviours. Before going wandering, most of my career was in Sydney and Melbourne. I think in Australia early career engineers are not doing the interviewing; it's more tech leads and engineering managers (or whatever we're calling them now, staff and principal?).
We would give interviewees a little sample of code (e.g., a minimal string copy function using a loop in C) and ask how they would make it better, without defining what better meant. Some developers would try to optimise it; others would point out the bad variable names and lack of documentation, or talk about unit tests. It was a starting point for a discussion.

Not really. Some places did that, sure. And some places do that now. But the cargo-culted nonsense before leetcode was the brain teasers. "How many ping pong balls can fit into a 747?" Also nonsense.

But the brain teaser is also just an exercise in trying to get a discussion going.

Okay, so how could I possibly memorise the volume of a 747 and a ping pong ball, and even if you give me that, it's a decent amount of complexity to calculate. But are we accounting for their comfort and making sure they all have TV screens? Do we have to account for FAA regulations? Is it okay if some of the ping pong balls get damaged as we try to fill the aircraft? Are we completely emptying the aircraft or leaving the seats in? Does the plane still have to fly, and if so, which areas have to be accessible under FAA regulations? What is the weight of a ping pong ball, and if you have enough of them, does that come close to the allowed take-off weight? Some of how you respond to these is an indication of how you will respond to challenges and frustrations in the team.

This was always the claim, yes. And much like how these days people complain about getting rejected for slipping up on a leetcode problem in a minor way, back then there were plenty of stories of interviewers treating these problems as black & white, where you were expected to just know the answer. To be clear, the problem wasn't the question. It was the interviewer asking the question. This book was about puzzle interview questions from Microsoft in the early 2000s: https://www.amazon.com/How-Would-Move-Mount-Fuji/dp/03167784...
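For what it's worth, the ping-pong-ball question above never required memorized volumes; the expected move was a Fermi estimate from round numbers. A back-of-the-envelope sketch in Python, where every input is an assumed guess, not an actual 747 spec:

```python
import math

# All inputs are rough guesses; that is the whole point of a Fermi estimate.
cabin_volume_m3 = 1000.0     # usable 747 interior: order-of-magnitude guess
ball_diameter_m = 0.04       # a ping pong ball is about 40 mm across
packing_fraction = 0.64      # random close packing of equal spheres

ball_volume_m3 = (4.0 / 3.0) * math.pi * (ball_diameter_m / 2) ** 3
balls = cabin_volume_m3 * packing_fraction / ball_volume_m3
print(f"{balls:.1e}")        # on the order of tens of millions of balls
```

Any answer within a factor of ten was usually fine; as the thread notes, the interview value was supposed to be in which assumptions the candidate thought to question.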
Also, a famous question around that time was "why are manhole covers round?". (Thought there was a book from that era with that name but not seeing it; only found a newer book.) Remember doing interview prep at the time by trying to memorize the answers to a bunch of these puzzles. Leetcode is useful in comparison to that. I had an interview once where the interviewer could tell I knew these, and he was asking me to tell him about others I knew about.

"Remember doing interview prep at the time by trying to memorize the answers to a bunch of these puzzles."

I remember bumping into a friend on the subway; he was coming back from an interview. We were trying to work out some cockamamie puzzle involving barnyard animals crossing the river. He felt he got it right, but one of the interviewers grilled him. When I got home I googled the problem and found out there was a generally accepted answer that was wrong. And that's what the interviewer was pushing. My friend had arrived at the correct answer. And he didn't get the job.

Ah, the crossing-the-river puzzle; got that during my campus interviews.

The manhole cover thing was always confusing for me, as where I'm from manholes are all square.

Wonder if people where you're from run into problems where the manhole covers tip onto their diagonal and fall in (since that's the answer I memorized and I think is known as the predominant reason).

Isn't the actual reason that round shapes handle pressure better? So in the end it is about the shape of the hole, not the cover, as holes in the ground are not square. Then again, I think here we have both square and round. Square ones are more for rain runoff on the sides of roads, whereas anything in the middle of the road is usually round with no grating, and some rainwater runoff locations are also round, for access again.

yeah I also memorized that answer, but my follow-up question is: storm drain grates[0] seem to be commonly made in a rectangular form, why doesn't the same issue apply there?
It depends where you live and apply. In France I never had whiteboards, leetcode, or psychological puzzles. It has always been "let's talk for an hour about this specific language that we use so you can prove you're a good fit."

Agree. I'm talking about the US. Much like today, where the belief is that 100% of companies do leetcode, 20 years ago the belief was that 100% of companies did brain teasers. Neither is true. And that was my real point. There was still the annoying cargo-culted crap that way too many companies used, but back then, just like today, plenty of companies didn't do this. And that's your point. EDIT: Don't get me wrong. It's annoying as fuck how many companies interview this way. It's just not as ubiquitous as HN lets on.

So Fermi problems are considered brain-teasers? We considered those especially useful for PMs.

Back when a fast microcomputer ran at 25 MHz and had 4 MB of RAM, back before Stack Exchange existed for people to copy and paste from, back when documentation came in a hardcopy book or on CD, back when compilers were something you paid hundreds of dollars for, back before there were publicly available frameworks and libraries that were open source and free as in beer, the bar was much, much higher and knowing at least the basic algorithms was the norm. Whining about coding during a job interview would have gotten you laughed out of the room.

It was usually a combination of algorithmic questions, some riddles, a few technical questions, etc. All of it was done on the whiteboard. The thing is that at the time there were no books of commonly asked interview questions, online leetcode lists, etc., so the assumption was that the applicant was basically going in cold, and there wasn't an expectation that they would be able to instantly solve the problems. It was more important for them to talk out loud so that the interviewer could observe their thought process.
There were ways to get problems, though, such as prior questions from the ACM programming contest. Those tended to be a little harder, though. Also, topcoder started in 2001.

Companies that liked to do stupid "gotcha" questions still did those; they just expected you to regurgitate the answer onto a whiteboard or (occasionally) into an IDE on a provided laptop instead of a website. All the other companies did the same as the non-leetcode companies do today: IQ-test type things ("next shape in the sequence"), Fermi problems ("how many piano tuners"), behavioral questions ("tell me about a time when..."), code review ("look at this [bugged] code, what improvements would you suggest"), systems analysis ("explain the sequence of events that happen when..."), systems design ("draw an architecture diagram of a project you recently worked on and explain it"), etc.

Smart and Gets Things Done, by Joel Spolsky: https://www.joelonsoftware.com/2006/10/25/the-guerrilla-guid... Still a great read. The basic idea is that they're smart enough to solve problems, and they get things done and don't just blab about ideas which they won't implement. So they'd write code to prove they can get things done. Brainteasers were okay for the former, but bad at testing the latter. This somehow evolved into leetcode culture or take-home assignments.

"Smart and gets things done": Hunter & Schmidt 1998 [1] pretty much boils down to this. Their meta-analysis, predating Spolsky, winds up stating that the only useful thing is a combination of work samples & an IQ test. Never mind the order; that smells a lot like "are they smart and can they get shit done?"

For a couple decades now, the way I have interviewed people is to ask a simple, very-high-level question, then repeatedly ask either "So how does it do that?" (drill down), or "What happens next?" (back out). For instance: What does 'printf("hello, world\n");' do? Obviously, it prints something, but how does it do that?
Pretty quickly you're talking about includes, macros, libc, linking, machine code, system calls... One question can easily fill an entire interview slot. The fun thing is there's no "right" answer. Nobody is expected to know everything about how software works, but everyone is expected to know something. This format gives the interviewee the opportunity to show off what they know best, and the interviewer gets to pry in to see how deeply they know it. I'm a low-level guy, so that's the direction I tend to probe. Usually someone else asks a similarly abstract high-level question. One of my favorites is: "Design a parking garage". Again, there's no right answer. It's a prompt for the candidate to show what they know. Very quickly they're coming up with functions and class hierarchies and/or data structures for vehicles, spaces, turnstiles, payment kiosks, figuring out how to pass them around, etc. The interviewer has plenty of opportunities to pry into design tradeoffs, add complications, and so on. The grand idea is to have a deep conceptual discussion instead of just seeing if they can write a few lines of code. This also demonstrates how well they can communicate. The catch is you have to be sure they give some actual concrete answers in a few places, and aren't just fast talkers.

I do the same. The last hiring I did, I sent them a homework assignment that shouldn't take more than an hour. Then we started talking about it. I asked them to explain what was done and why. Then we started expanding into the side areas of the solution, then zoomed out and discussed what the implications would be of doing x, y, z in the wider context of the system. The natural flow of the conversation reveals a couple of things: their social skills and how they approach unknown situations. I've heard plenty of 'I don't know', which was absolutely fine and much better than some fake confidence.

> Pretty quickly you're talking about includes, macros, libc, linking, machine code, system calls..

Maybe?
> The fun thing is there's no "right" answer

And this is the key. Keep an open dialogue. Always probe a layer or two down. Don't enter with preconceived notions of what's "right". It turns out very few things in our field are binary (har har). Instead: can they talk shop? Can they demonstrate that they didn't just read some crap from a blog? Is this from experience? Do you like them? Do you think they'll get along?

Yeah, these are my favourite types of interviews on both sides of the fence. A great way to just try to get inside someone's thinking. I used to like pair programming interviews as well, where you just implement something together. We used to do a take-home assignment (which LLMs have probably ruined now) and then extend it further in the pairing interview. There was no one right way to do the assignment. Different approaches (functional, object oriented, tdd/bdd) would all become part of the discussion.

I work as an engineering manager at a medium-sized company and my experience is fundamentally different from what some of you describe. I gave up doing any live coding or higher-level algorithm questions at all. For me the main challenge in a software engineer's daily work is product problem solving, and that's what we aim for in our interviews. Especially with recent advances in AI assistance, it becomes more and more crucial to learn fast and have the ability to apply knowledge to actual problems, no matter where the knowledge originates from. That, solid bug-hunting capabilities, and a good understanding of the big picture and the business problem you are trying to solve. I'd even argue that nowadays communication skills are much more important than any memorized knowledge about algorithms or a given technology. Communication is what makes you a successful developer, ironically, even when prompting an AI assistant.
Fun fact: 10 years ago I had an exam-like test sheet that I handed out to candidates and gave them one hour to fill out, including paper-based coding. It makes me feel seriously embarrassed when thinking about that with today's experience :)

When I was interviewing candidates in the 90's and 00's, I did the whiteboard thing, but not to implement strtoul (my Microsoft interview) or some algorithmic thing. I'd pick out a recent problem that was solved in our actual code base, clean out anything proprietary, and distill it down to something that could be designed and coded in about 40 minutes or so. That way, I could cover domain knowledge and coding knowledge in a microcosm of what the interviewee would actually be doing day-to-day. If I were doing it all over again today, I'd skip the whiteboard and bring along a laptop loaded with our compilers and toolchain and any supporting libraries needed to solve the problem. I'd mirror it to the screen in the interview room so we could discuss the solution as we went.

Was the bar just higher? In web dev interviews in the UK, the bar was much lower. It was rare to be asked anything that resembled a leetcode question. Everything was focused around the technology stack you'd be using, the culture of the company, and 'agile' (before people called it that).

In the eighties it was completely nonstandardized and depended heavily on each person you talked with. You would typically visit a company for the day while they passed you around from person to person. One might have you code something on a whiteboard. Another might ask you a brain teaser. Someone would go through the experience on your resume. And another might ask you something like "how would you figure out the number of pixels needed to make this wall indistinguishable from a picture (i.e., retina display)" just to see how you think. Lunch was often where team fit was assessed.

There were riddles and brain teasers.
Eventually people realized that people who were good at those weren't necessarily good at the job. Eventually people will realize the same about leetcode.

this is what i heard places like google asked. questions like how many marbles could fill an airplane. idk if it was just urban legends though.

I'm pretty sure Google published a list of such questions themselves at some point. But as I remember it, they were always presented as "show you can think", not "know the answer".

That's how leetcode questions are presented now, hah.

My interviews in the decade before the rise of leetcode were algorithmic questions very similar to leetcode. Surprise!!!!
I think people are crying way too much about leetcode.

I haven't had to be an interviewee in 6+ years, but I can understand why people are upset. Competing with other applicants who spend dozens of hours per week memorizing leetcode solutions sounds not fun.

I've been working in startups for around 15 years, remotely for US companies and AU companies. I hadn't even heard of Leetcode until a few weeks ago. A lot of companies still don't use Leetcode, at least in my experience, so I don't think we need to wonder too hard! Change hasn't been too drastic since I entered the business in the late 1990s. Salaries used to be less inflated, so hiring interviews were a little more lax, because hiring a $60K/yr developer is much lower stakes than a $150K/yr developer. You were also competing against fewer candidates. "Back in the day" before WFH they might have had 10 or 20 applicants instead of 200 or 2000. So there was less automation and more human factor. There were often still coding exercises. Your portfolio/work history mattered, as it does today. I think there was much less awareness of "software engineering" (sustainable, scalable processes like source control, CI/CD, etc.) as opposed to, like, "just hire a guy who is smart and writes code good." A lot of programming jobs at smaller shops were really kind of hybrid sysadmin/coding jobs. You might also be fixing people's printer drivers and shit, in addition to coding reports and data imports or whatever.

30 years of freelancing, and most interviews (Europe) have just been conversations, first with business, and then with a couple of techie team members. Which always felt like more of a 'does he fit in' type approach. The business side went along the lines of 'what value can you add', and the technical side was more about describing problems you had discovered, communicated about, and fixed, and details about the methodology or APIs involved.
I rejected interview requests from Google et al, knowing of the time-wasting LC approach. I think most developers are there to solve real-world business problems, not rewrite an OS, although actually that is currently my hobby project :) With AI assist, the focus on solving business problems comes back to the forefront, and the LC can be done by the AI for the ain't-so-bad coder. Or, I think it was mentioned here recently that coders will soon be relegated to AI reviewers; what a horrible thought. In summary ;) I think I'd be interviewing programmers focusing more on business domain knowledge at this point.

Leetcode is significantly overrepresented on HN. Leetcode is barely a thing across most of the world. Mid-career dev jobs in non-FAANG (and their orbiting startups) organisations don't expect people to spend their time studying for leetcode interviews. Leetcode-style testing is definitely not a thing in other industries: you don't interview for, say, a logistics co-ordinator by making them study and regurgitate logistics theory. The short answer to your question, and to try and not start up another HN thread on how good or bad differing interviewing techniques are, is that we did things similar to how most recruiters and hiring managers are still doing it across the world. I think the big difference is more around getting to the first interview. Technology means there are more applicants, more automated filtering (bad), and more remote interviews than in the past. The funnels to get candidates may have changed, but in my experience, once in the final stretch, things are pretty much the same as they have always been.

Leetcode? I find it interesting that there's an assumption that if you exist in this world as a professional software engineer and are successful by all reasonable measures, that somehow presupposes exposure to leetcode.
I've never used it for interviews, not because I put energy into avoiding it, but because I don't think it's all that popular, or maybe I just don't interview often enough. If I want a new job I wait until I'm emotionally done with the one I'm at, and none of the places I have ever interviewed at have used it. What, is it that if you're not using it today then you're somehow "behind"? I don't understand this post at all. What a loaded assumption. Does it do something only Leetcode can do? Is it some holy grail? I'm just burnt out on this tenor of the community here, as if any of these platforms are seated as some kind of hegemony of "the engineering scene". Yawn. An interview is and will always be a balance of your technical skill and your ability to present your work and deal with timely feedback. That's it. So for people who wonder wtf this post is about, you're not alone.

> An interview is and will always be a balance of your technical skill and your ability to present your work and deal with timely feedback. That's it.

An interview is whatever the interviewer wants it to be. Often it's a probing of your knowledge of data structures and algorithms via Leetcode-style questions. Like, very often. How many times have you interviewed in the last 10 years, and for what kinds of positions?

> An interview is and will always be a balance of your technical skill and your ability to present your work and deal with timely feedback. That's it.

That's the ideal case. However, reality can be very different. Some companies reach for leetcode-style questions because they don't have a clue how to do the interview process the way you describe it.

> What, is it that if you're not using it today then you're somehow "behind"?

That is not something you get to decide. The hiring manager will judge you as behind if you can't answer leetcode-style questions. This is not a question of knowledge, but a question of power.
At my first job, I had an hour or so to learn vi and show what I'd learned, then write a basic program (not much more than hello world) in C, of course using vi. The second job was similar, actually, only this time it was their awful proprietary word processor (a Wang clone, I think) and its macros and PL. Then, for a big-name computer company, a full day of IQ tests, personality tests, 3-4 interviews, whiteboard pseudo-code, team meets, etc. - but all candidates were processed in a single day, with an answer a few days later (I got the job) - not the same as today, where it's spread over weeks or months and you might be ghosted at any stage.

What would they have done if you already knew vi? Make you use ed?

Maybe emacs, so it's lucky I didn't know vi, lol.

Mid 90s at Apple, questions were often very specific to the platform. One manager I knew would bring a printout of a MacsBug stdlog - basically a current stack trace, disassembly around the PC, and the state of various system globals - and have you work through how it got there. As platforms have exploded, I wouldn't expect that level of familiarity anymore, but if I'm hiring for a C/C++ engineer I expect you to be able to describe in hand-wavy terms what a structure looks like, and to be able to write an accurate memcpy (and I don't care about syntax!).

Among the earlier techniques were questions like how many apples would fit inside Mt. Fuji. It was an invitation to do some think-aloud problem solving on an unknown answer, with the purpose of gaining insight into the candidate's cognitive processes. It turned out it was also a heuristic for readership of certain science magazines. I was totally surprised by them early in my career, and floored again when I learned that some families would discuss such problems in the car. It didn't map to the job, so it wasn't a good proxy for fitness.

In 2013, I got asked this question at Yahoo for a frontend job: explain in as much detail as possible what happens when I click a button.
——

Around 2004, I recall college friends talking about an exam Google published. I think one question was: What is DEADBEEF?

——

In 2008, the "Cracking the Coding Interview" book was published; it is like leetcode, but the problems are well formulated.

Just a casual conversation to see if there's a match. At least in Brazil, some companies still use this approach. My current company does for high-level positions, as does another one where I interviewed. They include two or three architecture questions, though.

Unrelated to the original question: I know it's unpopular to say here, but I found that studying data structures and algorithms to get ready for job interviews made me a better developer.

"What have you done?" "How did you do it?" "What problems did you encounter?" "How did you solve them?" It's not hard to recognize an impostor.

In 2012, an interviewer asked me what would happen if I removed the <!DOCTYPE html> tag. A very unexpected question, but I guess it was testing my /very basic/ fundamentals.

Nope. No pseudo-code. Definitely lots of whiteboarding. Lots of "recall arcane facts" (like "Why did our counter stop counting up at 16M?", the answer being "because you used a float for a counter, what's wrong with you"). And definitely lots of algorithmic questions. Better questions than leetcode, though, because they were usually grounded in an actual problem the interviewer had.

M1 here. I used to have those "actual problem" questions for my interviewees, but my parent company's HR said we had to stick to questions that would yield a better "yes/no" on a candidate than "how they think". In the end we were forced to use LC as a source for questions. Sigh.

I am sorry to hear that. And not surprised. (You can approach "how they think" as a question with strict rubrics, fwiw, but it's a somewhat uphill battle. And not one you can win as an M1, usually.)

The same as Leetcode, but on whiteboards.