Fear of over-engineering has killed engineering altogether

fika.bar

98 points by masylum a year ago · 84 comments

klodolph a year ago

> Before the 2000s, academics ruled computer science. They tried to understand what "engineering" meant for programming. They borrowed practices from other fields. Like dividing architects from practitioners and managing waterfall projects with rigorous planning.

I don’t think this is anywhere close to an accurate account of history.

If you look at the history of “waterfall model” then you find Royce (1970), and if you dig back farther you find Bennington (1956). From their writings, it sounds like people understood how bad the waterfall model was, even back then. The waterfall model primarily shows up as an example of what to avoid.

> Developers must ship, ship, and ship, but, sir, please don’t bother them; let them cook!

My explanation for this is that corporations are just really bad at building incentives for long-term thinking. The developers who ship, ship, and ship get promoted and move up, and now they’re part of the leadership culture at the company. The right incentives are not in place because the right incentives are too difficult—we want nice, easy-to-measure metrics to judge employee performance. Shipping features is a nice metric, and if your features move the needle on other metrics (engagement), then so much the better. You get retained, you get promoted, because you gave management a nice little present full of data on why you’re a good employee, wrapped up with a bow.

The reason that managers want nice metrics is because they want to avoid being blamed. Managers want to avoid being blamed for the wrong decision more than they want to make the right decision.

The way to counteract it is to cultivate trust. With trust, you can work on other things besides avoiding blame. When you’re working on other things besides avoiding blame, you can take the long-term view. When you take the long-term view, you can advocate for employees that fix problems and give them resources.

  • colechristensen a year ago

    "waterfall" is just a pejorative term for the kind of planning represented by a Gantt chart, which kinda looks like a waterfall.

    This kind of planning is _necessary_ for traditional engineering projects where certain steps of the process can take months or years and require the scheduling of many resources.

    Purely software projects usually don't need that because if your tools are good you can ship a completely new version in minutes to hours instead of months to years. You also get very easy do-overs if there are mistakes which you don't get if you're building a bridge or a factory.

    • weinzierl a year ago

      That you can ship quickly is not really relevant, though. Case in point: you most often cannot change the architecture midway without permanent damage. That's why many software projects' architecture looks like a building whose architect completely changed their mind after every floor, why everyone complains about technical debt, and why projects are often thrown away after a couple of years and rewritten from scratch.

      I'm not saying agile has no place, just that many projects would benefit from a little bit more foresight.

      Also, I think successful agile projects are often those where the long term planning is there, even if it is not obvious or talked about a lot.

      • colechristensen a year ago

        >I'm not saying agile has no place, just that many projects would benefit from a little bit more foresight.

        There is really a spectrum of choices for how much you plan and tradeoffs with each level.

        Success is finding the right balance for you, your team, and your product.

    • ethbr1 a year ago

      The phrasing I once heard (source sadly lost to my memory):

      "If the physical properties of concrete changed every 6 months, then structural engineering practices would look very different."

      But that's the world most software is developed and lives in.

      • kabdib a year ago

        "This avionics firmware is absolutely terrible."

        "Well, last week it was driving a bus."

        • erik_seaberg a year ago

          I'm reminded of a build break in Google web search because of a dependency on the self-driving car project. And that's how Bazel rules were given visibility lists.

    • janpieterz a year ago

      Waterfall in software development was different from a Gantt-chart-based "project that was managed," like the construction of a bridge (an example close to my house). They added new requirements to this bridge (they wanted the surrounding grounds maintained nicely, so they added irrigation, gardening, etc.). But no new requirements were added to the foundational, high-level "here's a bridge and it needs to cross this stream from this point to that point."

      Waterfall in software meant continuously changing those fundamental parts. You'd be fine-tuning the irrigation and then need to demolish the whole bridge because some new requirement came up. That's where the anti-Gantt sentiment and "bad waterfall" come from: the reality is that the software world as a whole moves way quicker, and requirements are way more malleable and subjective than when building a bridge.

      • colechristensen a year ago

        >Waterfall in software meant continuously changing those fundamental parts. You'd be fine-tuning the irrigation and then need to demolish the whole bridge because some new requirement came up.

        Exactly this same thing also happens in non-software "waterfall" projects as well. Mid-project fundamental requirements changes which result in having to re-engineer large parts of the project. This is one of the reasons military acquisitions are so expensive, there's a huge problem with requirements changes.

        • com a year ago

          And that is why large-scale construction companies can bid so low: most of their revenue and profit come from the inevitable scope and requirements changes later in the project.

        • janpieterz a year ago

          Perhaps to some degree, but I've never seen a skyscraper that was almost done being torn down to be re-architected and rebuilt.

    • marcosdumay a year ago

      > This kind of planning is _necessary_ for traditional engineering projects

      People keep repeating that, without any empirical evidence at all. If it were only software developers justifying a body of knowledge, it would be ok, but all kinds of people keep repeating that.

      Yet, that kind of planning predictably fails every single time in every single engineering practice.

      I can guarantee you: if an engineer designs the structure of a bridge, sends it to a crew to construct, and goes away to test it only after it's done, the bridge will fall down before it's even finished.

      • bluGill a year ago

        The fault is in not allowing time to test and fix in the right places, not in wanting a schedule.

        • marcosdumay a year ago

          The schedule never works anyway, either.

          Passing a design down and going away doesn't work. Scheduling tasks before you start them doesn't work. Testing only at the end doesn't work.

          The one thing civil engineering has that resembles PMBoK is designing things beforehand. And only civil engineering does that, only because they can't prototype.

          • colechristensen a year ago

            The benefit of a Gantt chart isn't meeting deadlines, it is organizing work. You have all of the steps and what each of them depends on and a timeline. If something slips you know how to update your estimates for everything downstream. If several things are at risk you can see which ones have more critical time constraints and allocate resources effectively.

            • marcosdumay a year ago

              > If several things are at risk you can see which ones have more critical time constraints and allocate resources effectively

              Hum... Have you used them that way? And have you needed a project design to actually discover risks and estimate that you need to act on them?

              Or better, have you found any unexpected risk on a project design, and seen it materialize on practice?

              > If something slips you know how to update your estimates for everything downstream.

              And then your renewed estimates will be wrong again. No idea if more or less wrong than before your revision though. It can go either way.

              • colechristensen a year ago

                >Hum... Have you used them that way? And have you needed a project design to actually discover risks and estimate that you need to act on them?

                Yes. These things are all over the place in defense.

                One of the most important exercises in doing a Gantt chart is finding the "critical path," which is the set of tasks that can't slip without affecting the final timeline. If step two has two separate tasks that step three depends on, the longer task is on the critical path and the shorter is not, because the shorter one can be delayed at least somewhat without affecting when step three starts.

                As the project continues you update the chart with what actually happened vs. your plans. The goal is to minimize changes; sometimes you can, sometimes you can't. But when there are changes, you can see how the whole flow is affected, and you can easily communicate the change in deadlines and budgets, pull people off of one task and put them on another, etc.

                In agile you do way less of this planning, and the path is usually much simpler, so you don't _need_ these tools to be successful.
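
                For a concrete (made-up) example of the critical-path idea, a few lines of Python are enough; the task names, durations, and dependencies below are invented purely for illustration:

                ```python
                from functools import lru_cache

                # Toy critical-path calculation: the critical path is the longest
                # dependency chain, i.e. the tasks that can't slip without pushing
                # out the end date. All figures are invented.
                tasks = {
                    "design":          (5,  []),
                    "pour_foundation": (10, ["design"]),
                    "order_steel":     (15, ["design"]),
                    "erect_frame":     (20, ["pour_foundation", "order_steel"]),
                    "finish":          (7,  ["erect_frame"]),
                }

                @lru_cache(maxsize=None)
                def earliest_finish(name):
                    # Longest path from project start through this task, in days.
                    duration, deps = tasks[name]
                    return duration + max((earliest_finish(d) for d in deps), default=0)

                # Walk back from the last-finishing task, always following the
                # predecessor that finishes latest: that chain is the critical path.
                current = max(tasks, key=earliest_finish)
                path = [current]
                while tasks[current][1]:
                    current = max(tasks[current][1], key=earliest_finish)
                    path.append(current)

                print("project length:", earliest_finish(path[0]), "days")  # 47
                print("critical path:", list(reversed(path)))
                # ['design', 'order_steel', 'erect_frame', 'finish'];
                # 'pour_foundation' has 5 days of slack before it delays 'erect_frame'.
                ```

                In a real Gantt tool the same walk also gives you the slack on every non-critical task, which is what lets you shuffle people around when something slips.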

          • bluGill a year ago

            Civil engineering is often a small variation on every other structure. As such, it is easy to estimate because things didn't change. When things are very different is when they overrun schedules.

    • supportengineer a year ago

      > "you can ship a completely new version in minutes to hours"

      Not all that long ago, software was distributed on physical media such as floppy disks and CD-ROMs.

  • RaftPeople a year ago

    > I don’t think this is anywhere close to an accurate account of history.

    This was my thought also. I've been building software systems since the early 80's and I've never been part of a project that looked like waterfall, but I've also been in small to midsize companies, not large.

    The approach on projects has been somewhat consistent for my entire career which is just a pragmatic look at the project and approach the problem based on the nature of the specific project. There are some projects or parts of projects that really require up front design and planning (e.g. when coordinating changes in multiple external systems that must all play nicely together) and some other parts of projects where the best approach is to iterate (e.g. we've never done something like this before and can't use our experience to guide us very much so we need to try some stuff). Projects are typically a combination of all of the above and more.

    When I originally read the agile manifesto my thought was "ugh, ya, this is all pretty normal common sense, sounds like these guys must be working in some large corporations where the bureaucracy has taken over and they are trying to push back."

  • fny a year ago

    This reminds me of a post[0] where 3 junior devs had written a spaghetti monster that generated 20 million dollars annually, and OP was crying for a rewrite while everyone on HN chuckled.

    "Long-term thinking"--often times, you ain't gonna need it. "Engineering" also rhymes with "bike shedding".

    [0]: https://news.ycombinator.com/item?id=32883596

  • turtle_heck a year ago

    > corporations are just really bad at building incentives for long-term thinking.

    Most corporations are really bad at long-term anything. CEOs want every quarter to be more profitable than the last, mostly everything else is irrelevant.

  • supportengineer a year ago

    The pay for developers is also decidedly non-linear compared to ability levels. Promotions are famously non-deterministic.

    Hordes of $300k developers trying to become $500k developers who are trying to become $1M developers.

  • ramoneguru a year ago

    100% agree with the "building for trust". The only problem is we tend to hire based solely on technical skill. I don't recall an interviewer ever asking me something like, "how do you build a team that values trust and what does trust mean to you?"

    Even when I joined the team, we never discussed trust and what we can do to be a more trustworthy team. It was always centered around how can we be more efficient in our deliverables.

    This is the video I usually reference: https://www.youtube.com/watch?v=kJdXjtSnZTI

  • eastbound a year ago

    In my startup, for all the future dev speed promised by developers who claim to know how to plan, after 4 years we have yet to see their features.

    I don't know, it feels like customers buy our product because it has features, but who doesn't like a good K8s stack.

    The market never seems to end producing those developers who spend their life refactoring and never shipping anything, nor taking responsibility for the fact that their pile of technologies SLOWS DOWN the other developers while never solving the problems they were implemented for.

    It’s not a manager problem. Developers don’t like taking responsibility for their mistakes.

    • ethbr1 a year ago

      Related: I don't know why software estimation doesn't spend more time on risk and less time on estimation.

      How much risk (to delivery) is left out there? What can we do to reduce that risk?

      It's a known thing, but seems rarely used in software shops. https://en.m.wikipedia.org/wiki/Spiral_model

      Who cares if 97% of the code is complete, but the remaining 3% is a critical feature, with unclear requirements, that's never been prototyped, and has no test data?

      • lemonwaterlime a year ago

        The challenge is the mindset in most software shops of "We're not building a space ship". Because of the ability to rapidly iterate, there's a sense of invincibility. As we've seen, however, some aspects of the product should in fact be treated like building a space ship. For instance, with something like properly managing data (so you don't get breached), a detailed risk assessment and robust strategy should be more highly prioritized. For displaying a UI to end users, perhaps not so much, but even then there are accessibility concerns that a good product would address.

        • ethbr1 a year ago

          Safety/security risk is important too, but I was suggesting development keep an eye on delivery risk.

          I.e. the risk that a project won't be delivered with the expected features and by the expected end date

          Fundamentally, time estimates are useless without a coupled risk/certainty metric.

          If you ask me how long something is going to take, and I believe it'll take somewhere between 3 months (worst case) and 2 weeks (best case), what number do I reply to you with? And how do you interpret that number?

          And how does my reporting change as we get closer to delivery?

          Reducing the size of work being estimated band-aids the problem, by shrinking the absolute magnitude of the bounds, but doesn't fundamentally solve it.

          Common heuristics like "double how long you think it's going to take" are other ways this appears.
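
          One common way to report that, though the thread doesn't name it, is a three-point (PERT-style) estimate: reply with an expected value plus a spread instead of a single number. A rough sketch using the 2-week / 3-month bounds above (the "most likely" value is invented):

          ```python
          # Three-point (PERT-style) estimate: expected value plus a spread,
          # rather than a single number. Best/worst match the 2-week / ~13-week
          # bounds mentioned above; the "most likely" figure is an assumption.
          best, likely, worst = 2.0, 6.0, 13.0            # weeks

          expected = (best + 4 * likely + worst) / 6      # 6.5 weeks
          spread = (worst - best) / 6                     # ~1.8 weeks, rough std dev

          print(f"estimate: {expected:.1f} +/- {spread:.1f} weeks "
                f"(best {best:.0f}, worst {worst:.0f})")
          ```

          The exact formula matters less than the shape of the answer: a number, its uncertainty, and the bounds it came from, all of which can be re-reported as delivery gets closer.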

          • lemonwaterlime a year ago

            I think we’re in agreement. In my mind, you can even get delivery risk from a safety/security standpoint. As in, the product should be developed on time while accounting for the security/safety stuff. And you get there by placing risk front and center for the entire system from the beginning.

    • klodolph a year ago

      > It’s not a manager problem. Developers don’t like taking responsibility for their mistakes.

      I’m glad you agree on both of these points. Like I said, it’s a problem with building the right incentives, and a problem with people who are trying to avoid blame.

      A developer with a good, long-term view has a light touch. On one side, you have reckless developers who ship out buggy products that cause problems for operations and customer trust. On the other side, you have fastidious developers who just keep refactoring everything and never ship features.

      Sometimes, you find a team that can do both. When I’ve seen this, the team has been made of both types of developers. A mix of skill sets, and a high level of trust that they won’t be blamed for failures (outages, missed deadlines). Your developers who move fast and break things will take the time to write a few tests and break things less. Your fastidious developers will relax their standards a little bit and push out code faster, knowing that their contributions are still valued.

    • advael a year ago

      Of course prioritizing shipping more features is going to ship more features. The whole point of prioritizing engineering is to trade off some speed for other things, like your shit not breaking. The incentives in most companies produce catastrophic failures because the mechanisms for punishing individual failure on speed metrics (the easiest metrics and thus the only metrics most organizations know how to or care to measure) are powerful and numerous, but the organization-level incentives don't adequately punish the kind of catastrophic failure we are seeing constantly. That cost is externalized from the organization that failed. Their customers and those customers' customers and anyone affected by the whole world being made of brittle nonsense pay the price and have meager recourse if any. If your reason for prioritizing correctness over speed is that you believe this will eventually produce more speed, I don't know what to tell you. That's possible and a really nice outcome when it happens, but it's not the point of doing it and it's not the typical benefit.

      • klodolph a year ago

        Here’s how I’ve seen this play out at some larger, more bureaucratic companies. (This is just an example, it doesn’t have to play out this way.)

        You start by moving fast and putting out features. Then something breaks in a high-profile way and the hammer comes down. The environment changes and people start avoiding blame. Some of the effects I’ve seen:

        - You bring more people into meetings (diffuses the blame when something goes wrong, but slows everything down)

        - You demand more detailed requirements (makes development more predictable, but good designs tend to evolve and do a lot of discovery)

        - Tedious work doesn’t get automated, and your headcount requirements go up (because tedious work is predictable, and automation is a risky project)

        Eventually, somebody in senior leadership (hey, I said “larger company”) notices that the entire org has basically slowed to a crawl and is constantly demanding more headcount, and they have to go in and fix things. Or they decide to axe a bunch of projects, or reorg, or something else.

        I want to emphasize that this is a possible way that the story plays out, not that the story always plays out this way.

      • MereInterest a year ago

        > Of course prioritizing shipping more features is going to ship more features

        I mean, I'm not entirely sure about that. Prioritizing shipping features may make a codebase impossible to understand, so that any additional features take longer and longer to implement. Prioritizing design may make a codebase easier to understand, such that each new feature can be added without breaking existing functionality.

        There's a quote I like from Jonathan Livingston Seagull: "It's strange. The gulls who scorn perfection for the sake of travel go nowhere, slowly. Those who put aside travel for the sake of perfection go anywhere, instantly."

    • llm_trw a year ago

      >The market never seems to end producing those developers who spend their life refactoring and never shipping anything, nor taking responsibility for the fact that their pile of technologies SLOWS DOWN the other developers while never solving the problems they were implemented for.

      The market absolutely produces those developers, it's just that their salaries start at 7 figures and go up from there.

skeeter2020 a year ago

This opinion piece paints a pretty limited perspective as the de facto state, which I don't think is really true. From my perspective, programming (in the vast majority of situations) is neither engineering nor computer science. The creations are not particularly complex, at least not in their initial manifestations where something like formal verification would help. Even the assembly patterns are not unique; the differences can only be determined by some form of build-test cycle. There are a lot more non-traditional developers in the world, by which I mean not comp sci or engineering (or uni) grads, which is maybe what the author interprets as "YOLO". I think that on the whole this is a really good thing, and even as a formally trained student in the area I don't agree we're in some sort of engineering desert because of the academics that came before us.

  • soulofmischief a year ago

    There's also this cycle of complexity most programmers go through.

    For example, I remember when I was told patterns were important, the time I put in memorizing, implementing and reviewing all these patterns, reading GoF, etc... Now, I'd just like something that works. Patterns are great for recognizing how code is intended to function, but tests are still the only way to verify the implementation of nontrivial code. And the smaller each contained unit of logic is, the more testable. So modularity and sensible, predictable interfaces are all you need.

    I used to overengineer a lot more in my younger days. Now I'm just smarter about writing extensible code up front and improving it as I go.
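
    To make the "smaller units are more testable" point concrete with a throwaway example (invented here, not from the comment): a small, pure function with a predictable interface needs only a handful of one-line tests to pin its behaviour down.

    ```python
    # Throwaway example: a tiny, pure unit of logic with no hidden state.
    def apply_discount(total_cents: int, percent: int) -> int:
        """Return the total after an integer-percent discount, rounding down."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return total_cents * (100 - percent) // 100

    # Because the unit is small and predictable, a few asserts cover it.
    assert apply_discount(1000, 0) == 1000
    assert apply_discount(1000, 25) == 750
    assert apply_discount(999, 10) == 899   # rounds down
    ```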

    • klodolph a year ago

      There’s a percentage of young developers who, right after learning to program, lean hard into patterns and GoF. You end up with architecture astronaut code pretty quickly. Some of these programmers are still unskilled and they are just adding a ton of complexity to a codebase that doesn’t even work yet.

      That said, I think the GoF stuff is mostly “stuff that works”, but…

      1. A lot of it is specific to the language, or certain languages, and

      2. Most of the patterns are useful only rarely.

      But there are exceptions. We’re just blind to them. We use the Command pattern for UI programs because it’s a sane way to implement undo/redo. We use the Factory pattern all over the place. YMMV.

      The mistake these young developers make is that they use “does this use patterns?” as a proxy for “is this good code?”
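
      For concreteness, a bare-bones sketch of Command-style undo/redo (the editor and command class here are invented for illustration, not from any particular codebase):

      ```python
      # Bare-bones Command pattern: each command knows how to apply and
      # how to reverse itself; the editor just keeps undo/redo stacks.
      class InsertText:
          def __init__(self, doc, text):
              self.doc, self.text = doc, text

          def do(self):
              self.doc.append(self.text)

          def undo(self):
              self.doc.pop()

      class Editor:
          def __init__(self):
              self.doc, self.undo_stack, self.redo_stack = [], [], []

          def run(self, command):
              command.do()
              self.undo_stack.append(command)
              self.redo_stack.clear()

          def undo(self):
              if self.undo_stack:
                  cmd = self.undo_stack.pop()
                  cmd.undo()
                  self.redo_stack.append(cmd)

          def redo(self):
              if self.redo_stack:
                  cmd = self.redo_stack.pop()
                  cmd.do()
                  self.undo_stack.append(cmd)

      editor = Editor()
      editor.run(InsertText(editor.doc, "hello "))
      editor.run(InsertText(editor.doc, "world"))
      editor.undo()
      print("".join(editor.doc))   # "hello "
      editor.redo()
      print("".join(editor.doc))   # "hello world"
      ```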

      • bgribble a year ago

        I was tech lead on a team with a guy like this. We sat down and had a long discussion and whiteboard sesh about a piece of code that could/should have been about 500 lines of code in half a dozen files. Walked out of it thinking that he understood the assignment and I should just step back and let him code it up.

        He came back a week later with something that was about 20k lines of Node.js, all just abstraction piled on abstraction piled on indirection, hundreds of files with like one line of code in each one plus a bunch of imports. "Separation of concerns" taken to the far extreme. 100% statement coverage by the horrible kinds of unit tests that you have to write to get 100% coverage.

        TBH I feel lucky I came out of that episode with my job, and it wasn't even my code!

    • llm_trw a year ago

      >but tests are still the only way to verify the implementation of nontrivial code

      But who tests your tests?

      • asdefghyk a year ago

        Your actual test cases would be reviewed for suitability and coverage of the change. (That's how we worked at my company. If our software had failed, damage could have been in the millions - a lot less than CrowdStrike.)

      • jghn a year ago

        the mutation tester

  • giraffe_lady a year ago

    > The creations are not particularly complex, at least not in their initial manifestations where something like formal verification would help. Even the assembly patterns are not unique

    Utility shafts and traffic lights are not particularly complex either, and are built using off the shelf parts. Nonetheless they make up the daily work of tens of thousands of civil engineers, and what they are doing is engineering.

    It's engineering because at its core it is a practice of tradeoffs. Building the strongest possible bridge for the sake of it is research or maybe art, building a strong enough bridge within budget is engineering.

    Applying known solutions to familiar problems is engineering, whether the project is a retention basin or a payment system.

  • mmcgaha a year ago

    To construct a building an engineer or architect may be involved but a carpenter is always required. Sometimes our software is a large public building and sometimes our software is a shed in the dooryard. It is important to know the difference.

csours a year ago

The big problem is "How do I connect the money to the work". In large corporations, this becomes project -> plan -> work. The project gets a budget based on the plan, then you do the work based on the plan.

The problem is the link between plan and work. As you work, you learn. That is the primary activity of software development. Learning is a menace to planning. As you learn, you have to replan, but your budget was based on the original project plan.

You can talk about engineering and culture and whatever you want, but if you're working for money, the problem remains of connecting the work to money and the money to the work.

I'm reminded of the Oxygen Catastrophe - https://en.wikipedia.org/wiki/Great_Oxidation_Event - we need oxygen to live, but it also kills.

ChrisMarshallNY a year ago

There are ways to do "JiT" engineering, and evolutionary design.

However, they generally rely on the practitioner being both skilled, and experienced.

Since the tech industry is obsessed with hiring inexperienced, minimally-skilled devs, it's unlikely to end well.

  • ethbr1 a year ago

    Imho, that's why you've seen SRE come into its own as a full discipline.

    At its best, it's not only an operational but an architectural older sibling to immature software devs.

    Devs get to go-fast, SRE plays adult and keeps everything on the rails.

mfer a year ago

It's more than the fear of over-engineering.

For example, with startups, the time to market, pivots, and not owning your decisions long term (which often happens) lead people to move fast and not consider consequences.

It's about goals and following the money. If a bridge fails there's significant legal liability, guilt over lost lives, and more. If software doesn't scale it can be rewritten; if a company is hacked and customer info gets out, there is a marketing black eye. It's different.

I say this as a classically trained engineer who thinks more engineering needs to be layered into software development. We need to justify it to the business.

  • zelphirkalt a year ago

    With "long term" probably being longer than 2 years, it will be pretty difficult to make most software engineers do that (own their decisions), because many are at a new job every 2 years or even sooner. Most do not see how the results of their work pan out in the end. Also, it is difficult to own a decision if you have no comparison. Say you build some system in way A. You are likely never going to get to build it in way B. This will forever stand as an argument that you cannot know what would have happened had you gone the way B route.

    So even if some decisions cause a business to need to hire additional people, just to keep that bad decision alive, the business usually does not stop and take a step back, to think "Wait a moment, if we hadn't done that, could we have avoided all this time spent on XYZ?" and then correct itself.

asdefghyk a year ago

It's OK to move fast if the cost of failure is very small.

I do not understand why the CrowdStrike change was not tested appropriately, and why this problem was not found in testing. My company has an automated test suite that takes some time (several hours) to run, along with manual tests, before any software is released. If it's a risky change, it needs to be reviewed by another developer. If it is an emergency production change, the testing is much lighter; however, the change is still reviewed by an experienced developer and still manually tested by a tester. The regression tests are not run...

simonw a year ago

The title of this piece is almost unrelated to the content. The post itself is about using napkin-math to estimate things like how much disk space will be needed for a feature.

  • serial_dev a year ago

    I was ready to jump in with my hot take, but I convinced myself to read the article first.

    After reading I had to scroll back to the title to see if maybe I clicked on a different article.

  • chasd00 a year ago

    I read the article and had the same conclusion. I was expecting "over-engineering" in the solution-complexity sense; the article is more like market analysis and initial capacity planning.

hintymad a year ago

Maybe fear killed engineering. Even in a decently managed company, I can see so many engineering efforts get stalled, compromised, bastardized, or killed. Like your manager asks you to get sign-offs from 10 different orgs. Like you want to do a POC, yet somehow an infra team demanded that you integrate with their system even though your targeted users didn't give a shit. Like you just want to submit a workflow to Temporal Cloud, yet a product manager demands that you create an abstraction because he is afraid of X and Y. Like you just want to serve 5TB of data, and you got sucked into this: https://www.youtube.com/watch?v=3t6L-FlfeaI.

I'm not sure how companies battle this kind of fear. It looks like Amazon's Working Backwards, Netflix's Freedom and Responsibility, and Uber's Let Builders Build can more or less counter such fear, but ultimately it's about finding the right people who have a good sense of product and who strive to make progress.

asdefghyk a year ago

Also, the CrowdStrike damage could have been largely avoided if incremental release to customers had been used, not release to the whole world at once.

For example, release the change to some group of customers, say 5,000; if that's OK, release to another, larger group of customers.

There was no planning for a failed update. There should have been a mechanism for auto rollback if problems were encountered. To me this is 101-level, very basic stuff.
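
A rough sketch of what that staged rollout with auto-rollback could look like (ring sizes, threshold, and the health check are all invented; a real system would drive this from fleet telemetry rather than a random number):

```python
import random
import time

# Invented staged rollout: push to progressively larger rings and
# automatically roll back if the observed error rate looks bad.
RINGS = [("canary", 5_000), ("early", 50_000), ("broad", 500_000), ("all", None)]
ERROR_THRESHOLD = 0.01                  # max acceptable crash/error rate

def deploy(ring_name, size):
    print(f"deploying to {ring_name} ({size or 'everyone'}) ...")

def observed_error_rate(ring_name):
    # Stand-in for real telemetry; here it's just a random number.
    return random.uniform(0.0, 0.02)

def rollback():
    print("error rate too high -- rolling back and halting the rollout")

for ring_name, size in RINGS:
    deploy(ring_name, size)
    time.sleep(1)                       # soak time (hours or days in reality)
    rate = observed_error_rate(ring_name)
    if rate > ERROR_THRESHOLD:
        rollback()
        break
    print(f"{ring_name}: error rate {rate:.3%}, promoting to the next ring")
else:
    print("rollout complete")
```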

  • asdefghyk a year ago

    Another change that could/should have been possible for the customers is not to have auto-installs: the install would be tested by the customer first before being deployed to the company's PCs ....

    • astrange a year ago

      A virus definition file isn't very useful unless it can instantly update. The virus doesn't ask before it installs itself.

ethbr1 a year ago

Nice perspective! And I appreciate that an actual use case of what the author is recommending is attached.

In my experience, a lot of ills in software come from the working set of facts around a particular problem becoming too large for one person to hold.

Then you get Healthcare.gov v1 -- everyone proceeds on incorrect assumptions about what their partners are doing/building, and the resulting system is broken.

As a salve to that problem, napkin-math upper/lower bounds estimation can be incredibly useful.

Especially because in system design "the exact number" is usually less important than "the largest/smallest likely number" and "rate of growth/reduction".

Simplifying things that aren't useful to know in detail (e.g. the exact numbers for author's users) leaves time/mental space for things that are (e.g. if it makes sense to outsource a particular component to SaaS).
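
For concreteness, the napkin math is usually just a couple of multiplications with deliberately round numbers. A toy version of the kind of disk-space bound the post works through (every figure below is invented):

```python
# Toy napkin-math bound for a feature's storage needs. All numbers are
# invented; the point is the order of magnitude, not the exact value.
users_low, users_high = 10_000, 100_000      # plausible range of users
items_per_user = 50                          # average records per user
bytes_per_item = 2_000                       # row plus indexes, roughly

for label, users in (("lower bound", users_low), ("upper bound", users_high)):
    total_bytes = users * items_per_user * bytes_per_item
    print(f"{label}: ~{total_bytes / 1e9:.1f} GB")
# lower bound: ~1.0 GB, upper bound: ~10.0 GB -- comfortably within one
# ordinary database, so no need to shop for a storage SaaS just yet.
```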

zokier a year ago

Calling 90s software development overly rigorous and academic is certainly an interesting take. It is also the era when PHP and Perl ruled the world, and C was still the domain of cowboy coders instead of language lawyers.

It's only in the 00s that any sort of methodology (even if it's agile) starts to get wider recognition, and academic languages like Haskell spark interest. The 00s were also the peak era for architecture astronauts; for example, Java EE and C++ Boost were almost completely 00s products.

The counterreaction to that was the rise of low-ceremony stuff like Ruby on Rails, or HTML5 toning down the W3C's overwrought stuff (XHTML), and now the pendulum has been swinging back, with TypeScript and Rust as examples.

  • Apocryphon a year ago

    I have to wonder if, in the '90s, formal verification was even used in large corporations other than the ones supplying the most critical (and legally scrutinized) hardware: aerospace, automobiles, healthcare devices, etc.

FrustratedMonky a year ago

Complexity is going to exist.

I've seen a lot of projects that try to 'make it simple', engineers go around saying "KISS". Hyper focused on simplifying everything. But they failed.

They only realize later that by simplifying, they have just shoved the complexity into some corner, and never dealt with it head on, and it just corrupts everything.

It's like cleaning house, and you just shove it all in the closet. Does it mean you are really neat? Is your life really simple?

It's like squeezing a water balloon. The complexity is going to bubble out and break somewhere. But you aren't in control of how it breaks.

So, just acknowledge that not everything can be 'simple' and deal with complexity.

tekla a year ago

I was unaware that software devs encompassed all of engineering.

kkfx a year ago

IMVHO the problem lies in "specialization": in the past, most technicians in any field had a certain general culture and comprehension of the world; these days they are so "specialized" that they are unable to see the big picture at even a CHILDISH level.

As a result, no matter the skills, if you do not know the world you are working in, it's only by chance that you design something good for that world. Developers MUST KNOW the big picture.

davedx a year ago

Clickbait title, IMO

simpaticoder a year ago

Software is an inherently chaotic space. A ~10 KB program is about 80,000 bits, so there are on the order of 2^80,000 possible states for it. Computer science is not engineering, just as physics is not mechanical engineering. Both CS and physics have the privilege of working within tiny imaginary systems. Engineers do not. Humans are currently engaged in a privately funded search through an effectively infinite space for the patterns, methods, visualizations, and rules of thumb that can produce a binary that meets human-expressible boundary conditions. It is natural that some humans quail at this seemingly impossible, slow, arduous journey, and they reject engineering. It is also natural that some humans cling so tightly to a concrete approach that they cannot absorb new models. In a very real sense, as humans select software methods, software methods select humans.

bjornsing a year ago

I’m not sure it’s fear of over-engineering. The biggest difference during my career has been the switch to the “saas model” where software is never done and there is no clear line between development and operations.

  • swatcoder a year ago

    Yes, shrinkwrap software is/was like building a ship. You had to get it seaworthy on the first go and could maybe hope for a little touch-up at some friendly port later on. Neither the process nor materials were perfect, but robustness, resilience, and completeness had to be part of the plan.

    The dominance of continuous deployment and auto-update culture changed it from ship-building into something more like an exquisite corpse or papier mache project, where people just constantly tack new things on, pull old things off, and rush to make repairs as haphazardly as can be gotten away with.

    In a giant software market, they both have their place and both bring upsides/downsides, but they're radically different ways of making software and some of us strongly prefer one approach to the other.

  • zer8k a year ago

    To me it's not that either. SaaS can be done well and has been done well historically.

    The difference between then and now is the MBA-ization of tech. The MBA cancer infests and does the only thing it can do. Create spreadsheets to force people to track time (points) and other stupid metrics. These can then be boiled down to near meaninglessness but make the non-technicals feel like they have control.

    The result is people like me who want to do good engineering can't. You budgeted 40 points for a 70 point project. Congratulations, I have a gun to my head where I can't write good code so I end up fixing it when it eventually causes some level of SEV. That is, if I'm lucky. If I'm not lucky I have to bandaid the bandaid and hope to god I can run out my tenure so I either get promoted out of dealing with it or quit. Only to do it once more at another company.

    If tech companies were run by engineering, like they used to be, things would not be this way. Non-technical, garbage, McKinsey-level MBA consultants are the problem. Second to them is completely incompetent project management. Typically these two groups intersect on more than 80% of traits.

    • Apocryphon a year ago

      Meaningless metrics and Goodhart's Law are certainly problems, but this sounds more like an issue of rigid locked-in processes. If an engineer believes that vital changes need to be made, they should be able to argue for it to get it prioritized. Point allocations shouldn't ever be set in stone.

      > If tech companies would be run by engineering, like they used to be, things would not be this way. Non-technical garbage Mckinsey level MBA consultants are the problem. Second to them is completely incompetent project management. Typically these two groups intersect on more than 80% of traits.

      I'm not altogether convinced this is the solution either. Google, famously, is an incredibly engineering-driven company, and these days it can't keep a product around to save its life. Engineers aren't necessarily great at project management. I'm not sure if it's MBAs at Google sending Reader to the graveyard, or perhaps engineering management reading the usage tea leaves and making a cold, calculated numerical decision rather than considering other customer factors.

      • astrange a year ago

        Reader wasn't cancelled for business or technical reasons, but it was for a good reason. I can't share it, mostly because I've forgotten too much of the story I was told to repeat it.

        I find it unimpressive how much people focus on it. They don't seem to know there are other RSS readers.

        • Apocryphon a year ago

          There are plenty of other products in the Google Graveyard, though. It's just the one that gets brought up the most.

      • zer8k a year ago

        > I'm not sure if it's MBAs at Google sending Reader to the graveyard, or perhaps engineering management reading the usage tea leaves and making a cold calculated numerical decision rather than considering for other customer factors.

        Who manages those managers? The answer is probably MBAs with spreadsheet driven metrics for cutting projects. Reader in particular was egregious because it very easily could've been productized and sold.

        > Google, famously, is an incredibly engineering-driven company

        Perhaps it once was. Sundar has next to no experience in the industry he leads. This is not uncommon. A brief look at several of the Google executives now shows a dramatic lack of experience in actual software engineering. Some of them have high-flying CS degrees but that is a meaningless metric.

        Coincidentally in my startup-filled career I have rarely met an executive that was an actual engineer. When I do, they are typically 10-20 years behind current technology (even ignoring the new-framework-a-week nonsense). Actual, hardcore, born in the trenches engineers seem to have their heads held underwater by the sociopathic business school leadership.

        • Apocryphon a year ago

          On the other hand, was engineer Page's second tenure as CEO better than businessman Schmidt's?

  • Apocryphon a year ago

    All programs are Steam Early Access now.

renewiltord a year ago

Rigorously plan your own company. I won’t. Then we’ll just meet each other in the market. If your thing is so good, people will buy your thing.

readthenotes1 a year ago

Time-space-money and similar triads leave off one of the more important dimensions: quality. It's the iron tetrahedron.

taneq a year ago

Not in any actual engineering field. There it's being killed the traditional way, by 'do what you did last time' and 'meh it'll be fine'.

iancmceachern a year ago

It's common on HN, and in things that are posted here, to think of or refer to comp sci and devs as the whole of engineering.

This title should be:

"Fear of iver-engineering has killed software engineering altogether"

ravenstine a year ago

> Before the 2000s [...] [i]t was bad. Very bad. Projects were always late, too complex, and the engineers were not motivated by their work.

Though I've understood this to be true, it's not a problem unique to that era.

What was perhaps more unique to that era was that there was less room for bad software, and software businesses were more directly impacted by bad software.

I would argue that there's little to no objective evidence that the industry was actually made better by Agile-inspired methodologies. If anything, methodologies served as a means to distribute blame and, incidentally, allow bad software to continue to be written.

This phenomenon probably wouldn't have ended well if it weren't for hardware picking up the slack and the ever decreasing standards users have for their software. Today, everyone I know expects the apps and websites they use to be broken in some way. I know that every single effing day I run into bugs I consider totally unacceptable and baffling. No, I'm not making that up. I'm serious when I say that I run into bad software every day. Yet we've normalized bad software, which begs the question of what these artificial methodologies like SCrUM are actually for.

> To make things worse, engineers took Donald Knuth’s quote “Premature optimization is the root of all evil” and conveniently reinterpreted it as “just ship whatever and fix it later… or not."

People should stop listening to people like Knuth and "Uncle" Bob Martin as gods of programming bestowing commandments unto us.

> I do think the pendulum has gone too far, and there is a sweet spot of engineering practices that are actually very useful. This is the realm of Napkin Math and Fermi Problems.

> Developers must ship, ship, and ship, but, sir, please don’t bother them; let them cook!

I don't think it's a pendulum. This phenomenon is real, but I've just as often seen teams of developers ruled by inner circles of "geniuses" who either never ship anything that valuable themselves or only ship horribly convoluted code meant to "support" the rest of the peon developers.

These issues are less a reaction to something someone like Knuth said and more to do with businesses and teams that make software failing to understand what competence in software engineering actually means. Sure, there's subjectivity to how competence is defined in that domain, but I'll just say that I don't consider either YOLO or geniuses to be a part of that.

> Fermi problems and Napkin Math [...]

I honestly don't get what the author is trying to achieve with the rest of the article. Perhaps that engineers trying to do actual engineering should use math to approach problems? I guess that makes sense as a response to YOLO programming, but effectively just telling people to not YOLO it really doesn't address the organizational problems that prevent actual competent engineering from taking place. People didn't forget to use math; they're disincentivized from doing so because most companies reward "shipping" and big egos.

tomohawk a year ago

This seems like it's aiming at something, but missing.

My take as an engineer (not a PE, but I have the degree) is that the engineering mindset is quite a bit different from the computer science mindset, which is quite a bit different from the technician mindset.

Each has their strengths and weaknesses. Engineering is pragmatically applying science. Computer science has more of a theoretical bent to it - less pragmatic. Technicians tend to jump in and get stuff done.

Especially for major work, I'll do paper designs and models. The engineers tend to get it, but the computer scientists tend to argue about optimal or theoretical cases, while the technicians are already banging something out that may or may not solve the problem.

More recently (past 5-10 years), I've seen a notable lack of understanding from new programmers about how to do a design. I'm currently watching a sibling team code themselves into a corner due to lack of adequate design. They'll figure it out in a few months, but they're "making progress" now and have no time to waste.
