Ask HN: Code examples to find out if a person is addicted to “overengineering”
I know one person who creates the most complex solutions possible for any kind of programming problem, and it has a bad impact on the business and its projects. So I want to know if anyone knows of questions, tests, or code examples I could show to that person to see what decisions he will take.

Have them build "A framework for X", where X is some commonly-done problem most programmers in their field have worked on (webapps, CMSes, forums, build systems, big data, etc.), ideally related to your business. If you want to do this in an interview or take-home setup, you will likely need to use a cut-down version of this problem, although really, much of the point of it is to see just how much the problem expands.

An engineer who doesn't overengineer will first ask a lot of questions about why you need X, then will build X, and only after you have tweaked the problem requirements will generalize that into a framework. They are likely to think this is a bad interview question, which it is, and push back to get the actual concrete requirements. That's exactly what you want; overengineering happens when people guess at the requirements instead of making sure of them.

An engineer with a propensity toward overengineering will love this question, and immediately jump into all the features their framework will support. They'll get lost in thought brainstorming new features that might be cool, but when it comes time to implement it, they'll either completely fail to reconcile all the contradictions that have crept into the design, or they'll come up with a very complicated mess that does everything but trades away unspoken requirements (like performance or simplicity) to get there.

This mirrors how frameworks actually work in the real world: basically every framework that people actually use was extracted out of a working app that solved one problem well, while every framework designed from scratch is an overengineered mess.
> will first ask a lot of questions about why you need X

Have an upvote, I love advice like this: a good engineer has the ability to make the one posing the problem also reflect on the problem and the requirements, and should never just say "ok, let's do this" without making sure it is really needed. Because face it, there are a lot of people out there coming to engineers with a question and a "solution" which is actually already partly an implementation, in a domain they're not really good in. (An example of that style: "I want a sandwich! Engineer, build me a sandwich vending machine!") Good engineers know how to deal with that as well, and will ask as many questions as needed to figure out the optimal solution (the right combination of good enough, least work, and least technical debt) in a given situation. Which, sometimes, might be as simple as "won't fix, no real need for it".

I don't know if I necessarily agree with this. I'm not necessarily coming at this from the question OP proposed, but from an interviewing standpoint. I've seen a lot of interview questions that didn't have a real use in reality, so asking the interviewer _why_ they wanted something implemented can be seen as badgering them, and as a sign you might not be a good fit. On the other hand, if you were to focus your questioning on the implementation, and the interviewer was fishing for you to question _why_ they wanted something, you may be seen as someone with a propensity to over-engineer. It's potentially a lose/lose situation.

Good thoughts here, but there's some generalization in the closing paragraph. Not every framework built from scratch is useless or a mess.

But the track record there is very bad in my experience.

Thanks a lot!

It's not an addiction, it's a phase. When an engineer is young, they will do anything to make something work. When an engineer is older, they've learned how to make things work, but then learn that changing things in the future can be hard.
So they try to build in as many hinges in as many places as possible. Many years later they will realize that only 1/10 of their "flexible" solutions needed to be as flexible as they made them, and then they become experienced developers who know how to build something that works, how to build something that can be flexible for the future, and, most importantly, how to question whether the piece they are making needs to be flexible at all (or, even better, question the business on whether it needs to be built at all).

I'd say the test shouldn't be how to find out whether they are in the overengineering stage or not. Instead, I'd filter for engineers who can be reasoned with. If you are an engineer in stage 3, it's okay to have a stage 2 engineer on your team, but if he can't take direction then he'll be a liability.

When a candidate solves a programming problem, they typically don't do it with a minimum of abstraction or complication, because employers aren't interested in that. Instead, they'll try to pick something that they want to communicate through the medium of the exercise. If they choose unit testing, the solution might be over-engineered to be testable at a much finer level than would be necessary in production code, and have more abstractions than usual. Or the candidate might be trying to communicate some kind of design aesthetic, whether it's familiarity with functional programming, object orientation, design patterns, or some framework du jour that requires a lot of boilerplate. All of these will smell a lot like over-engineering on the typical toy problems seen in programming exercises.

IMO it's very easy to get a mismatch between expectations and presentation in the solution to a programming problem, such that the company and the candidate aren't communicating on the same wavelength. Unambiguous, very simple problems with binary solutions may be better as a basic filter (e.g.
Codility or something like it), followed up with pair programming in something more similar to a working environment, where pair communication and direction can get people on the same page.

My stock interview question:

> Tell me about the project you are most proud of, which best demonstrates what you're capable of. It could be academic, it could be professional, but be careful not to share any proprietary information from previous employers.

Then I proceed to ask a bunch of code, design, and implementation questions based on that scenario/project. If runtime is important, how they optimized for it; if scaling seems difficult, how they approached that; etc. Note: this is a varsity-level interviewing question, since you need to be able to quickly come up with meaningful and consistent questions based on their proffered product. It works very well, though, when interviewing intermediate/senior engineers.

How can you be so sure you aren't underengineering, and this person you know is actually planning ahead wisely?

Personally I feel as if underengineering is less of a problem than overengineering. I've seen both in real-life scenarios, and usually you just throw the underengineered code away and start over with knowledge gained from the previous solution. And the reason you can do that is that you usually realize you have an underengineered solution fairly early.

A truly underengineered solution doesn't provide mechanisms to allow it to be refactored or deleted.

Details. If someone can provide a detailed analysis explaining that it's very likely we'll need feature X in the future and it's more efficient to architect it in now than to slap it on later, that's planning ahead. If someone wants to do something just because of a vague one-liner about "future needs" or "best practices", and when you press them on why they don't really have any coherent reasons, that's over-engineering.
It's typically mediocre engineers who don't really understand the concepts and are just applying a formulaic solution.

Just ask them to describe an original piece of code or system design they are particularly proud of. That will reveal what design principles they value the most.

Thanks! It is a good example :)

Ask them how they would, on a Unix system, do some task that's trivially done via a standard POSIX shell command. E.g. "how would you count the number of lines in a file?", "given a regex, please print lines in a file matching the regex", "given N lines, sort them by numeric value", etc. The most egregious over-engineering I see is someone writing a 100-line Python script for something that could be at most a 100-character shell one-liner with a couple of pipes between different programs. If they pass that, ask them about a problem that would be trivially solved by resisting the urge to reinvent make, an HTTP server, or an SMTP server.

Edit, re downthread: yes, obviously only in the context of someone expected to know *nix in the case of those examples. But the general approach is very transferable. Ask the candidate about some trivial problem solvable in 1 minute with the standard tooling they use every day; you'd be amazed at the architecture astronauts that come out of the woodwork, keen to waste their time on some needlessly over-engineered solution.

You're assuming this person has in-depth knowledge of shell commands. If that's what you're testing for then fine, but I wouldn't say it's a good measure of over-engineering. They might have tons of Python experience on Windows and haven't used grep in years, so that 100-line Python script might be the most efficient solution they can think of. Also, any interview where I'm expected to have memorized obscure shell flags and one-liners with no access to so much as a man page can go fuck itself. That's just an IT/DevOps spelling bee.

That's hard to do without some context.
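For reference, the POSIX one-liners alluded to a few comments up might look something like this (a sketch; the file names and the regex are placeholders):

```shell
# Count the number of lines in a file
wc -l < notes.txt

# Given a regex, print the lines in a file matching it
grep -E 'error [0-9]+' notes.txt

# Given N lines, sort them by numeric value
sort -n numbers.txt
```

Each of these replaces what could easily balloon into a small script in a general-purpose language, which is the point of the exercise.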
They may assume, since they are there for a development interview, that the interviewer wants to see code.

That implies that you're interviewing for a position that requires Unix sysadmin tasks. Without knowing anything about your interviewee, he might've grown up programming only in Eclipse/Visual Studio/Xcode/etc., and might not even be aware of the Unix tool suite. In that case, I wouldn't find it surprising that he uses the tools that he knows.

Ask them to make some simple yet non-trivial feature, like a contact form on a web page. The overengineer will introduce a build system, several functional programming utilities, an HTTP client wrapper library, a DOM abstraction library, and maybe a full-blown framework. The competent developer will write a <form> element with inputs.

Are there actually experienced developers who still overengineer? I always thought of that as something that comes from lack of experience. Straight-from-university developers, or people who have never worked with a team before, tend to do that, but I can't see how you would hold on to this attitude after a bit of experience. And unless you're letting an inexperienced developer lead your team, I can't see the problem. As long as the team leader eventually communicates and pushes an overengineer in the right direction (make it work > make it simple > make it generic), is this really a problem?

I worked with a guy who felt a burning need to use a certain JS functional utility library for literally everything. He would even use the library to access properties of objects instead of using JS's built-in dot operator. This led to a lot of over-engineering on the micro level, although he had good practices on the macro level. The project was well thought out, but open any file and you'd see a bunch of this library's fanciest functional tricks. I found it pretty bizarre. My guess was that he simply applied a different standard to code than most other people.
Instead of preferring simple and obvious code, there was some other arbitrary criterion of cleverness he was looking for. This guy was very experienced as well. I think it may just have been boredom.

I have seen experienced developers who overengineer. It's hard to definitively know why, but I suspect sometimes it's so they can get some new "thing" on their resume, or added to the approved toolbox for their team.

I tend to think of overengineering by a team member as a mistake of the team leader; unless you're really hiring a solo developer, in which case you'd better pick an experienced one. If you communicate your priorities properly and put real emphasis on not creating unneeded code, people aren't stupid; they won't do that. Most projects don't live in a vacuum, and the difference between an overengineered solution and a good solution depends on outside requirements; it's the team member's job to keep it on track.

I could have been talking about the team leader in this case. They are fallible people too. It wouldn't be unheard of for one to introduce Kafka, Elasticsearch, React, etc., when it wasn't needed... for reasons unrelated to the task at hand. As mentioned, one reason would be to gain experience in it, or to get it pre-approved for the team to use... in some future, more appropriate situation.

> lack of experience

Some people have 10 years of coding experience. Others have 1 year of experience 10 times.

He is an experienced developer with great coding skills.

Ask the person to implement "Hello world" and then count the lines of code, number of classes, inheritance depth, number of external dependencies, etc.

That's plain Java-cism there.

I think you're probably better off avoiding a trap or a trick here, and instead just asking about it: "Give me an example of a time you over-engineered something" and "Give me an example of a time you under-engineered something".
Most people should have reasonable answers to both, and the one they have the most trouble with is probably the one you should be more concerned about.

I think you need to dig and see what the aspirations of the system were vs. what was delivered. The people I've worked with who always went for the complex solution could list off a million "awesome" things the system was going to do. It rarely did any of them. I'd also look for people who are constantly starting from scratch. The complex-system builders I've known are _terrible_ at maintaining systems. So they throw them away every 12 to 18 months and start over. If you think they're complecting too much, ask them how they supported one of their creations over _years_. Lastly, ask them what customers thought of the result. Look for specifics here. They should give names, use cases, etc. Really, they should understand the business problem their customer needed solved, and be able to communicate why the complexity of their solution was necessary to solve the customer's problem.

I ask candidates to sketch an API (program, not REST; i.e. an outline of classes, functions, and module structure) for a problem similar to those we solve, but smaller in scope. I also ask about things like test coverage and programming paradigms they have an affinity for. Some things I look for that can be negative signals are:

* Propensity to choose classes and OO programming in a domain it isn't really suited for, or functional/procedural programming where OO might fit better

* Propensity to build for what might be rather than what is (e.g. always having an abstract class even if only one concrete class is necessary)

* Propensity to maximize test coverage (e.g.
splitting the logic of functions into smaller ones just to generate test cases, even when those smaller functions are only called from one location) rather than to design for a solution

Actually, the question is related to an employee, not a candidate :) He has been working with us for quite a long time, and he writes really good code.
But he can't make simple solutions; he only takes the hardest way to solve a problem. And because of it, we can't plan work and estimate tasks.

Well, that's a different problem. What is the nature of the work? The first thing that pops into my mind is: perhaps he sees an appropriate level of complexity that isn't accounted for by the other engineers, product owners, or other business stakeholders.

I believe "overengineering" shows itself not through code but through time. Let me explain: a person thinking through a solution that should only take 1 hour and spending an entire day on it is overengineering. What goes through this person's mind is something like: "Maybe solution X is better?", "Is my code good enough?", "What happens in area X?", "Maybe add more tests to cover X?"... You get the point. That's why I always focus on the end result and try to be as pragmatic as possible. Would this solution be enough to solve the problem? If yes, then move on. Don't get me wrong: performance should also be taken into consideration, but not to the point that it consumes all your time.

I think with overengineers you have to look at the actual problem they are trying to solve and compare it to the potential problems they are trying to solve. I often do "cool" stuff only to realize that it was way too complex for the problem. But I make a point to trim it down later. I would add a step towards the end of the project that focuses on deleting unused stuff and simplifying as much as possible. As always, there is no hard rule, and you often can't predict at the start whether something is a good idea or not. On the flip side, people who immediately cry "YAGNI" and never try anything can be a drag too.

https://news.ycombinator.com/item?id=14397089

Here's a comment of mine, from a month ago, pertaining to interview coding exercises. My example accomplishes the goal you've described; however, I'm not sure it's an exact fit for your situation, given the context you've provided.
Perhaps some more information about your particular case would help identify good solutions.

Maybe try asking them what they consider overengineering, and ask for some examples of times they've fought against an overengineered solution? Also ask about when code duplication is OK. I can't imagine you can gauge this easily otherwise, as e.g. code examples are going to be too artificial, but you could probably tell from their answers to the former questions whether this is an issue they've thought about before.

Here's an easy challenge: ask the overengineer to solve problems without introducing any new classes or functions. This starves the overengineer of oxygen, reducing their abstraction options to "loop", "branch", and "add variable". The resulting giant switch statement in a loop is likely to be a relatively clean architecture.

Question: "Would you like a lot of stock options?" Show them you are serious about engaging business-minded developers.

"How would you build a to-do app for yourself?" Because it seems that to-do apps are incredibly susceptible to overengineering?

If it's NodeJS, the fastest way is to check how many dependencies they have in total. Bad developers will find a way to include 100s or 1000s of dependencies: anonymous third-party blobs of code running within their application.

Don't agree with the approach, because using a proven solution can be superior and net a simpler code base than rolling your own. FYI, by using webpack, a couple of loaders, babel, etc., I net 100s of dependencies just for the build. YMMV.

I think this is right. I might ask how they start the dev environment or kick off a build. It should be something like `yarn start` and `yarn build`. We all work with really complex systems. The trick is to abstract them to the point where they're simple to understand and use. 1000s of dependencies don't even have to be complex at all if you use simple patterns to manage those dependencies.
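A quick way to eyeball the dependency count discussed above (a rough sketch; it assumes an npm-style project with its `node_modules` directory already installed on disk):

```shell
# Direct (top-level) dependency count. Note that scoped packages
# (@org/name) appear as one directory per scope, so this undercounts a bit.
ls node_modules | wc -l

# Rough transitive count: every installed package ships a package.json,
# so counting those files approximates the total number of packages.
find node_modules -name package.json | wc -l
```

Neither number is exact, but the order of magnitude (tens vs. thousands) is usually what the parent comment is after.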
$ perl -wle "print 'hello world'"
$ echo "hello world"
"Hello World"
echo -e "#include <stdio.h>\nint main() {printf(\"Hello world\\\n\"); return 0;}" | gcc -o ./foo -xc - && ./foo && rm foo
echo "Hello world!" > hello.php
php hello.php
Another indicator is how elaborate a build process they need to munge whatever they wrote into whatever a machine understands.

$ npm install