Hacking a Google Interview - MIT's guide to Google interviews (courses.csail.mit.edu)

I've interviewed with Google, Apple, and Microsoft. It always strikes me that the questions they ask you during interviews are not at all representative of what you need to be able to do on the job. That's why these course materials are incredibly valuable.
You have a better shot at these types of interviews coming straight out of college than you do after X years of job experience. The interviewing mindset is very different from the working mindset. If you want to do well, you should study these PDFs for a few weeks (repetition is the mother of learning) before your interviews. I did exactly that before my last round of interviews, and it made a huge difference.
I didn't post them on HN because, honestly, I don't want them to be too publicized. These are the exact questions that you will be asked at these types of interviews. It's borderline cheating.
Man, you're good at copywriting. Actually, I found your reply quite appealing, and it follows some guidelines from a lot of the copywriting books I'm trying to read these days. But anyway, thanks for sharing.
it follows some guidelines from a lot of the copywriting books I'm trying to read these days
Agreed that the comment was good. What guidelines are you thinking of?
I've done the Google interview thing (had 5 interviews, got asked to do more, declined due to my wife taking another job), and I can confirm I had some (I think 4 or 5) of these questions.
In my interviews I generally started with a naive answer, got the good answer on my own and got to the best answer with some hints from the interviewer. They didn't seem to regard this as a bad thing.
At one point when I started working on the "better answer" straight after a "good answer", the interviewer asked me why. I confessed that it looked to be a similar class of problem to one in the previous interview, and he told me that was excellent.
There were two problems I did badly at. One was a math question (number theory), which I got after a bit of work. The other I didn't really get anywhere with. It wasn't included in any of these linked items, either, so I guess I can't say much more about it. It was a fair question though.
Can I ask what interview stage you were asked these in, and what level the position was for?
While these are good questions, in my experience they're so widely known from books/websites that I'd be amazed if a company like Google was using them for anything other than early screening of candidates.
I did the phone screen, plus on-site interviews. It was for a developer position - I've got 10 years experience.
I was going for an overseas position, which complicated the process, because they had to fly me to another city for a combination of onsite and video interviews.
Obviously they asked other questions, too.
My impression was that they hadn't planned on doing another round of interviews, but the mixed feedback I got confused them (I did very well on at least one question, but totally bombed on another). Who knows, though - the whole interviewing process was a bit of a mess to be honest.
I realize the notes are for preparing a potential employee for a Google-style job interview.
I am curious if there are any employers out there with tips for fellow employers about how to screen candidates.
I find that a vital skill is understanding good code design, that is, placing the responsibilities with the correct code, keeping dependencies down to a minimum, proper encapsulation of the complex stuff, separation of concerns, etc.
I recently hired someone who is a decent coder and problem solver, but I regularly have to “correct him” when he starts new submodules because he still hasn't mastered the “design” aspect of coding.
That is, he might do some rudimentary analysis about how to structure things and what the API should be, but if I don't stop him, it always ends up being more complex than need be, and I honestly don't think he knows how he could do it better, because it is really the same things I say to him over and over.
Since I will hire more people down the road and since I am tired of having to point out the same design flaws over and over, I am interested in tips for “testing peoples design skills” and also book recommendations about how to master big software projects without it ending up as tightly coupled components with arcane code — which sadly seems to be how most projects do end up…
Complexity can sometimes be a perspective thing. I recall a discussion at one point with a co-worker where we were each arguing that the other person's approach to a particular problem was more complex. After much contemplation I was able to see both perspectives.
Also with some problems there seems to be a conservation of complexity notion. It's inherent in the system and depends on where you push it.
For the book, you might want to check out Large-Scale C++ Software Design. Unfortunately it's at home so I can't crack it open and double-check, but it seemed full of suggestions on structuring code and laying out a large software project.
Yes, I agree that a lot about how to structure code is subjective, which is also why I am very interested in having better terms to describe this — thanks for the book suggestion!
Giving it some more thought, I think it boils down to realizing when one is solving more than one problem, and then being able to write the code so that each problem is solved in isolation, yet without introducing complex APIs between the different pieces of code.
Often though the sub-problems are subtle and hard to predict, plus realizing that two problems are being solved is not in itself enough to figure out how to actually split it up (sometimes splitting things up contributes to the perceived complexity)…
My favorite book of this type is Thinking Forth, available online at http://thinking-forth.sourceforge.net/. You will get a large dose of the Forth mindset, but it's mostly higher level design tips and not as specific to Forth as, for example, How to Design Programs (read: Functional Algorithms in Scheme). It's a fun read, and a great way to get a perspective on design that is different than what you get from the usual object-oriented or functional language centric sources.
The article is totally fine for HN. However, I don't think it's "hacking" a test if I study for it, so why "Hacking a Google Interview"? Is "hack" the new "pwn" or something?
The "hacking" aspect is that during the interview instead of mentioning you've heard the problem before you're supposed to pretend you're a genius who solved the problem on the spot.
Actually I think that advice is misguided, as in my experience when you tell an interviewer you're already familiar with their favorite brainteaser they're just as impressed as if you solved it yourself, and you don't feel dishonest.
I guess in this case hacking is tantamount to optimization by restricting the types of problems you'll study (if you believe the guide).
Um. Just had a look at Common Questions Part 1 (http://courses.csail.mit.edu/iap/interview/Hacking_a_Google_...), and the first question is about substring matching.
Instead of giving a Boyer-Moore style solution (with sub-linear complexity), it gives the naive approach (with quadratic complexity).
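For reference, here's roughly what the naive version looks like (a Java sketch of my own, names made up): it slides the pattern along the text and compares character by character, which is O(n*m) in the worst case, whereas Boyer-Moore precomputes shift tables so it can skip ahead and runs sub-linearly on typical inputs.

    class NaiveSearch {
        // Returns the index of the first occurrence of pattern in text, or -1.
        // Worst case O(n*m): for each of the ~n starting positions we may
        // compare up to m characters before hitting a mismatch.
        static int indexOf(String text, String pattern) {
            int n = text.length(), m = pattern.length();
            for (int i = 0; i + m <= n; i++) {
                int j = 0;
                while (j < m && text.charAt(i + j) == pattern.charAt(j)) j++;
                if (j == m) return i;
            }
            return -1;
        }
    }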
Are there any MIT students who would not know these things? Big-O notation etc?
From their introductory page:
It covers time complexity, hash tables, binary search trees, and some other things you might learn in 6.046. However, most of the time is devoted to topics you won't learn in class, such as crafty bitwise logic and tricks to solving problems.
The purpose of mentioning the Big-O in that document wasn't to inform about what it is. Rather, it was to note to the candidate not to go into technical detail during the interview (i.e. we like the intuitive version).
Just because you graduated from MIT doesn't mean that you're smart. I find that school is usually about how much you can cram in your head rather than how much you actually know. Most students can parrot back that a quicksort is an O(n log n) algorithm, but fewer will actually be able to explain why or what that even means.
Probably, if the student was not an EECS major. I studied physics there, and Big-O notation was not something that came up. Heat and entropy, however, were things that we talked about a lot.
They get the definition of polymorphism wrong, because their definition ("ability of one method to have different behavior depending on the type of object it is being called on / object passed by parameter") is also met by compile-time function overloading. Their example, where you have a custom integer class that is capable of dealing with both other integer arguments and floats, is pretty poor.
Any definition of polymorphism should at least mention something about the method that is invoked depends on runtime data, and is not resolved (or known) at compile time.
Any definition of polymorphism should at least mention something about the method that is invoked depends on runtime data, and is not resolved (or known) at compile time.
At least in the C++ world, that's not true. There are things like compile-time polymorphism and run-time polymorphism.
And by the way, why do you think polymorphism should be based on run-time data? Simpler things can be achieved at compile time with static analysis, which many OO languages do [C++ and Java, for sure].
Just because it's simple to do, you don't have to declassify it from being polymorphic.
Whenever I think of polymorphism in C++, I think of base pointers, virtual functions and the vtable. I guess my mind just puts run time and compile time in separate boxes.
Broader definitions than yours are in common use. You might find this classic paper: http://www.is.pku.edu.cn/~qzy/plan/lits/FundamentalConceptOf... and this updated survey: http://lucacardelli.name/Papers/OnUnderstanding.A4.pdf interesting.
Any definition of polymorphism should at least mention something about the method that is invoked depends on runtime data, and is not resolved (or known) at compile time.
I'm not sure that's always true. Or at the very least seems language dependent.
Yeah, Java doesn't do multimethods (i.e., method dispatch based on runtime argument types).
If Class X has two methods:
    void foo(Object i);
    void foo(Integer i);

then:

    Object test = new Integer(4);
    X x = new X();
    x.foo(test);

will invoke 'void foo(Object o);' based on the compile-time type of test. Java does do dispatch based on the runtime type of X - but it is quite limiting once you are used to having proper multiple dispatch at your disposal.
(caveat: not in front of a javac - this is from memory).
You're right. It calls 'void foo(Object o);'
    import junit.framework.TestCase;

    public class TestMultimethods extends TestCase {
        public void test() {
            Object test = new Integer(4);
            X x = new X();
            assertEquals(x.foo(test), "foo(Object)");
        }
    }

    class X {
        String foo(Object i) { return "foo(Object)"; }
        String foo(Integer i) { return "foo(Integer)"; }
    }
"Any definition of polymorphism should at least mention something about the method that is invoked depends on runtime data, and is not resolved (or known) at compile time."
I think the emphasis on runtime data is a mistake. It would be possible to imagine a static, OO language which has polymorphism implemented using static analysis - perhaps in limited circumstances.
(I agree with your point about their definition also being met by overloading)
Actually, polymorphism is when two or more clearly different phenotypes exist in the same population of a species — in other words, the occurrence of more than one form or morph. </tongue-in-cheek>
Polymorphism is certainly not a very well defined notion. Personally, I think the best answer would be "I don't know a precise definition, I've heard it mean different things based on context, for example...".
Any definition of polymorphism should at least mention something about the method that is invoked depends on runtime data, and is not resolved (or known) at compile time.
Yes,

    class A { ... f() ... }
    class B extends A { ... f() ... }
    class C extends A { ... f() ... }

    A x = ... // a B object or a C object
    x.f()

is polymorphism if anything.

> They get the definition of polymorphism wrong, because their definition ... is also met by compile time function overloading.
That doesn't make the definition incorrect; there are many types of polymorphism.
Function overloading is an example of ad-hoc polymorphism; generics is an example of parametric polymorphism; and inheritance is an example of inclusion polymorphism.
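To make the distinction concrete, here's a small Java sketch (the class and method names are my own, not from the course materials):

    class PolymorphismDemo {
        // Ad-hoc polymorphism: which overload runs is resolved at compile time.
        static int twice(int x) { return x + x; }
        static String twice(String s) { return s + s; }

        // Parametric polymorphism: one definition works uniformly for any type T.
        static <T> java.util.List<T> pair(T a, T b) {
            return java.util.Arrays.asList(a, b);
        }

        // Inclusion (subtype) polymorphism: the override is chosen at run time.
        static abstract class Shape { abstract double area(); }
        static class Square extends Shape {
            final double side;
            Square(double side) { this.side = side; }
            double area() { return side * side; }
        }
        static class Circle extends Shape {
            final double radius;
            Circle(double radius) { this.radius = radius; }
            double area() { return Math.PI * radius * radius; }
        }

        public static void main(String[] args) {
            System.out.println(twice(21));      // 42 (int overload, picked at compile time)
            System.out.println(twice("ab"));    // "abab" (String overload)
            System.out.println(pair("x", "y")); // [x, y]
            Shape s = new Circle(1.0);          // static type Shape, dynamic type Circle
            System.out.println(s.area());       // dispatches to Circle.area() at run time
        }
    }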
I have actually tried but could not find any scientific evidence that these types of technical interviews actually result in the best hiring decisions.
When I worked at Amazon.com they did analyze interviewing techniques, and I assume Google and other large companies do too. They didn't do controlled scientific experiments (as far as I know), but they looked at things like which phone screen questions were correlated with success in later on-site interviews, and did post-mortems of "wrong" interviewing decisions ("no" votes on someone who was eventually hired and successful, or "yes" votes on someone later found to be unsuitable). The data that comes out of this is not exactly scientific, but it does represent lessons learned from experience. [For what it's worth, a typical Amazon interview is a lot like a typical Google interview.]
Indeed. These types of interviews are much more about finding people who "think outside the box... just like me!" than about finding people who can do the job.
It would surprise me if there was any scientific evidence of this kind. This would be incredibly difficult to measure, control for other variables, arrange for a sizable sample, etc.
They seem to get finding the median slightly wrong:
"Note that finding the median of an array is a special case of this where k = n/2."
I'm pretty sure it should be k = (n+1) / 2. (If there are 5 numbers, you want the 3rd one, if there are 6 numbers you want the average of 3 and 4.)
k = n/2 is correct when you are dealing with zero-indexed arrays.
For arrays whose length is even, I think it's also acceptable to take either one of the two values (instead of the mean of the two) as the median value. Wikipedia is fairly ambiguous on this as it only states "one often takes the mean" and "sometimes one takes the average of the two median numbers".
I believe that for a large enough sample size the discrepancy shouldn't even matter all that much.
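To make the indexing concrete, here's a tiny Java sketch under the zero-indexed convention (my own, not from the slides): for a sorted array of odd length the median is a[n/2], and for even length you either average a[n/2 - 1] and a[n/2] or just take one of them.

    class MedianExample {
        // Median of an already-sorted array, using zero-indexed positions.
        static double median(double[] sorted) {
            int n = sorted.length;
            if (n % 2 == 1) {
                return sorted[n / 2];  // e.g. n = 5 -> index 2, i.e. the 3rd element
            }
            // n even: average the two middle elements
            // (some definitions just take one of them)
            return (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
        }
    }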
Most companies blacklist interview questions that are highly publicized. Maybe companies should also ask if you've read questions online before an interview begins. :)
That would be an interesting meta-question... If you admit to studying interview questions, the interviewer could interpret it as being intellectually curious, practical, or honest. On the other hand, they could interpret it as being insecure, weaselly, or simply unevaluable.