Does the software industry learn?
Part of me says "of course we learn, look at how many bad practices are at least significantly less common than they were in the olden days". It's quite rare you come across a decently run tech company or team that doesn't use version control, makes changes live in production without a testing environment, doesn't use automated tests at all, or has no deploy process beyond 'use FTP'.
10-20 years ago that wasn't really the case, and good practices like those were much rarer even in more tech-savvy teams and organisations.
Same with web development related stuff. People aren't using tables for layout anymore, nor are things like accessibility some sort of completely ignored concept that no one takes seriously. And people do actually use CSS for layout purposes rather than font tags and spacer images and whatever else the days of Geocities style web development had in store.
So it's definitely advanced in many areas.
In the 1980s, Git didn't exist, and CVS was a complicated thing that Unix people used, so I didn't have "proper" version control. I did, however, have a stash of ZIP files with increasing sequence numbers on floppy disks.
In the 1990s, we used FTP to send files to web servers because we didn't have WebDAV or any more secure protocols at the time. We used tables because that is what we had.
We had fewer tools than today; it really doesn't amount to any learning, just better tool availability. Knowledge itself seems to have stayed about the same.
...but aren't those tools a representation of the additional knowledge that the industry accumulated over time?
> 10-20 years ago that wasn't really the case
To state the obvious, 10 years ago was 2012. Absolutely all of the things you list were standard practice. 20 years ago was 2002, also same thing (no, git didn't invent version control).
30 years ago I was using version control at my first job, it was expected practice. As was having test coverage. None of these things are new.
Deployment was different, granted. We didn't "deploy to production", we shipped a box of floppies to users.
I think there's a myth that rewriting software is bad, or at least a symptom of badness, and "reusing" software is ideal.
But writing software is really like creating plans: plans for what the machine should do. Of course there is much room for reusing old plans as components of your new plan. But still every plan must be about what is needed at the moment, not about reusing existing plan-components.
In human communications we don't "reuse" old communications much, do we? Well, maybe we do a bit, using common utterances. But those are more like idioms of the language, not "subroutines".
> I think there's a myth that rewriting software is bad, or at least a symptom of badness, and "reusing" software is ideal.
There's also a weird opposite myth that old software is bad, and urgently needs to be rewritten with newer paradigms.
I'm currently hired to replace a piece of extremely stable '90s software with hundreds of janky eventually consistent microservices because, apparently, that's how software must be written nowadays, and doing it that way makes it better in some unspecified fashion.
It's not at all clear what is broken and what we're trying to improve with this work. But it's top priority.
Haha, yes. My first front-end job was to rewrite a totally reliable piece of software written in an obscure JavaScript framework. Obscure, but still working pretty well, appreciated by the users, and making tons of money for the company.
Well, AngularJS was such a trending thing that we HAD to rewrite everything in it. For no reason. No new feature. Bonus: I was the only dev on this project (because I was replacing the only front-end dev, who was leaving) and I had not a single idea of how AngularJS worked at the time.
No supervision, no prior experience, no business goal. I'm not proud of the result. But at least I learnt a lot of things (including CoffeeScript; I'm sorry to whoever took the codebase after me).
> There's also a weird opposite myth that old software is bad, and urgently needs to be rewritten with newer paradigms.
Some of the challenges with old software come from the ecosystem around it. Is the language it's written in still supported on a modern OS (Visual FoxPro, for example)? Is the database it's using still around? What about the libraries? Are they tied to a certain runtime version with no update path? Do you even have access to the source code for these dependencies? Or the application itself? Can it build on a recent machine?
I've seen projects where some infrastructure around an old piece of code was re-written to make it run on modern platforms, but that takes good engineering (some assembly required) that companies often simply can't afford.
My feeling is that rewriting is used as a blunt weapon when the developer lacks the inclination or skill to learn what was written before. I've been guilty of this.
That certainly happens, but rewriting also happens because what is already available doesn't get the job done and can't be customised so that it does.
One of the curses of modern software development is that so many of the resources we rely on are highly opinionated. We have frameworks and toolkits and scaffolding scripts and proprietary formats and convention-over-configuration to do things that could have been done with flexible, self-contained libraries and open standards.
These opinionated resources are often easier to create, but they can cause great harm over time as a small application that got an initial boost from them grows into a large application with changing requirements. Those requirements can't easily be met if nothing composes and nothing can be adapted, and each dependency becomes a fight between the direction its developers want to take it (which might not be right for the application) and the need to keep updating because those developers no longer support older releases of their code with essential bug fixes and security updates.
In this kind of environment, it is hardly surprising that some teams prefer to build things in-house even if they are reinventing a few wheels. It is sometimes the only way to retain control of your own code or to regain that control after the technical debt becomes overwhelming.
> lacks the inclination or skill
Or the time. When you've been given a hard one-week deadline to produce something, you can commit to writing something that works with a lot more confidence than you can commit to figuring out somebody else's piece of garbage.
Most rewrites I’ve seen are huge system-replacement projects that ran over budget and often missed requirements. I’m not saying there are no legitimate reasons for rewrites. But when people bring it up, I think you need to ask a lot of questions.
I've advised engineers to rewrite code before, mostly for resume building to help jumping ship to somewhere better.
If a company doesn't offer stock-based comp and sticks to prevailing wages/CoL, there's no incentive to help the company succeed. Especially with non-technical management, the best thing to do for ICs is often to aggressively rewrite (in the language that's trendy right now and that better companies use) and get promoted by showing off significant output. Then jump to a better company, having spent a year training on their stack at the previous company's expense.
But even if you know what is written, good luck making that PHP 3 website keep up with the times.
Or that old C library written for old versions of OS/other libs.
Agree, but you are giving a more extreme example.
Most rewrites I saw were "wow, this Java code from the old team is a mess, let's rewrite it" and, in a couple of years, the new devs will think the same about our rewrite.
I think you're onto something there. It's hard to understand code. So maybe the best way to accomplish that is to rewrite it. That way you will have inhouse people who understand and can maintain the software your business is running on. Once those people are gone it's time for the new recruits to rewrite it again. It's a continuing process.
...or a tool for CV-driven development.
I suppose if you squint, then rewriting software means: properly reading/understanding the old code.
Sort of.
A lot of software isn't quite like a plan. It's more like a training manual with hundreds of plans for different scenarios and rules for choosing them and how to modify them.
People who write individual plans don't reuse them or believe in reuse. They're useless once the plan needs to change.
People who write metaplans aren't solving problems, they're trying to solve general classes of problems, like "Manage booking at any business that has things that are booked".
> But those are more like idioms of the language, not "subroutines".
Actually, a lot of the mechanical manual work we do consists of subroutines.
Examples:
1. Make an omelette (params like which kitchen or which ingredients are derived from context)
2. Change engine oil
3. Take blood sample
From Alan Perlis' Epigrams [0]:
> 14. In the long run every program becomes rococo - then rubble.
And:
> 15. Everything should be built top-down, except the first time.
I think these help in understanding the lifecycle of a project.
A first top down design should be understood as a draft, a direction. With experience these drafts get better, but the act of bottom up programming is necessary to discover details, problems and requirements that inform a more robust and flexible design.
A rewrite can be partial or total. Maybe it is enough to refactor some components, maybe a new design was discovered during the initial iterations, which yields a system that composes and evolves more robustly and works better as a whole.
As a web developer I program user facing applications, so there is always a kind of pressure to have highly domain and feature specific interfaces. Just get it done, move to the next.
But by learning from the Lisp world [1], Out of the Tar Pit [2], software design books [3][4] and the pain of experience, I've come to the conclusion that there is a better way. There is often a general layer that wants to be discovered that we can use to build the needed features. A workflow, framework, DSL or maybe just a bunch of functions or a data model.
This layer is often very hard to get right the first time around, especially if we don't have strong analogies already. It is often more natural to discover it by writing or trying to write the specific code first. Premature abstraction is the root of all evil.
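As a toy illustration of that "general layer wants to be discovered" idea (the domain and every name below are invented for this sketch, not taken from the article): write the specific features first, and only then notice the small general piece they share.

```typescript
// Two specific features, written first without any abstraction.
function overdueInvoices(invoices: { dueDate: Date; paid: boolean }[], now: Date) {
  return invoices.filter((i) => !i.paid && i.dueDate < now);
}

function inactiveUsers(users: { lastLogin: Date }[], cutoff: Date) {
  return users.filter((u) => u.lastLogin < cutoff);
}

// Only after writing both does a tiny "general layer" suggest itself:
// a reusable predicate builder, discovered rather than designed up front.
const olderThan = <T>(getDate: (x: T) => Date, limit: Date) => (x: T) =>
  getDate(x) < limit;

// The specific features then become thin compositions over the general piece:
// overdueInvoices ~ invoices.filter(i => !i.paid).filter(olderThan(i => i.dueDate, now))
// inactiveUsers   ~ users.filter(olderThan(u => u.lastLogin, cutoff))
```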
But also [0]:
> 7. It is easier to write an incorrect program than understand a correct one.
We should be brave enough to move forward, while assuming that we're probably wrong. Scientists often practice this principle and try to figure out in what ways they _are_ wrong. I think this is a good way to approach programming too.
[0] http://www.cs.yale.edu/homes/perlis-alan/quotes.html
[1] https://dspace.mit.edu/handle/1721.1/6064
[2] http://curtclifton.net/papers/MoseleyMarks06a.pdf
[3] https://web.stanford.edu/~ouster/cgi-bin/book.php
[4] https://mitpress.mit.edu/books/software-design-flexibility
Given the number of times we have failed to learn the lesson "downloading code from untrusted sources and running it is a bad idea" -- the log4j and NPM colors fiascos spring to mind -- I think it's fair to conclude that this industry is completely incapable of learning anything, ever.
Ignoring the fact that basing one's opinion of an entire industry on two "fiascos" seems drastic at best, who can we trust if we suddenly can't trust organizations like Apache? Do you trust the Linux Foundation?
It's almost like the issue is not that code is available, but how people use the code that's available, and no one seemingly likes funding open source code.
Everyone seems to be accepting the premise but I’ll reject it. For-pay software has lots of bad security vulnerabilities too. SolarWinds is an example. Windows and Office exploits. Browser 0-days. Etc etc
It’s almost like software is extremely complex and security is very hard in general. You’re always going to make some kind of trade off.
The problem is that we as humans don't know how to correctly estimate risks like security risks. That means it's not priced in when you go and ask "Should I incorporate software package X into my build?" or "Should I automatically take updates from my upstream?". There are no good answers here either. Ultimately you need to be careful about which dependencies you take on, which ones need to be kept up with, and which ones should be pinned (but even in the best-case scenario, issues will occur).
> and no one seemingly likes funding open source code.
I’m not sure how this meme got started but it’s toxic. Why does free software need funding? Free software needs contributions. Big corporations make contributions by paying engineers. Everyone benefits in this ecosystem.
> I’m not sure how this meme got started
How? Look at the multitude of projects and see that most people using the software are not contributing back, with either time, money or anything else.
> Why does free software need funding? Free software needs contributions
You're saying the same thing, "contributions" is one way of funding projects, "funding" doesn't just mean money, it also means contributing engineering hours, security audits or any other way of contributing back.
But without any funding (money, time and/or effort), it's really hard to do security audits for example, since it's expertise many developers don't have nor get to educate themselves about on the job.
How is it toxic to see how little everyone who uses open source/free software is contributing back to the projects they use?
Why do most users need to contribute? The value is the ecosystem. The point is we don’t have to contribute to everything we use. We can build on the work of others and they can build on ours.
Even developers of common libraries are relying on an amount of open code so immense they couldn’t possibly make contributions to all of it. This is the beauty of free software.
> We can build on the work of others and they can build on ours.
This only works if both parties publish. Otherwise it's "we can build on the work of others"
Many companies seem to be able to benefit in various ways from contributing to open source / free software.
Examples:
- Chromium and Android obviously benefit Google, as they make it easier to ensure ads get through.
- They also limit the ability of Apple/Microsoft to control those revenue streams in their walled gardens.
- Hardware and software vendors benefit from making sure Linux works well with their products.
- Making TensorFlow free helps build a community that in turn makes hiring easier.
- Contributing to Torch may protect against a monopoly.
- Contributing to other R or Python machine learning tools may help limit the power of companies like SAS or IBM/SPSS.
- Similarly, contributions to Postgres/Mongo etc. wrest power away from Oracle, MS (MSSQL) and IBM (DB2).
- More of the same: Proton vs DirectX, OpenCL vs CUDA, FidelityFX vs DLSS. When a competitor tries to establish a standard that is either paid for or limited or proprietary in some other way, providing or contributing to open alternatives may be easier than providing a direct proprietary competitor.
- DataBricks' founders benefit from being part of the creation of Spark, and can get paid for adding further value.
Many of the above are cases where large to huge corporations use their power to disrupt competitors by providing free alternatives in areas where the competitor has some market dominance. Other contributions assist in delivering a basic product for free while getting paid for products that add value on top.
Every individual in the open source community is receiving more benefit than they could ever personally contribute. Keeping score is pointless. The fact is large corporations use open source software and also contribute to the ecosystem, just like anyone else. This is fine.
‘Free’ literally means that you don’t need to contribute back.
If a contribution is required, then it’s not free.
I find that usually there's just one person (or a very small group) with a vision, motivation, and the skills required to take the project in a good direction and keep maintaining it. Contributions from others tend to be fixes and features of limited scope, essentially drive-by contributions, not enough to keep the project going.
Funding would help ensure that those who have the skills and motivation and vision can keep working on it.
It's also a constraint-setting problem. No one deploying software with log4j would've said "yeah, the logging system should be able to reach any IP address at all if asked to by external input."
But we lack a decent way to express that sort of data flow constraint when deploying software.
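To make the gap concrete, here is a purely hypothetical sketch of what declaring such a constraint at deploy time could look like. Nothing like this is a standard tool today, and every type and name below is invented for illustration.

```typescript
// Hypothetical egress policy a deployer might attach to an app bundle.
// None of these types or names exist in any real tool; this only sketches
// the kind of data-flow constraint the parent comment says we can't express.
interface EgressPolicy {
  component: string;          // which part of the app the rule applies to
  allowOutboundTo: string[];  // explicit allow-list of hosts
  denyByDefault: boolean;     // everything else is blocked
}

const loggingPolicy: EgressPolicy = {
  component: "logging",       // e.g. the log4j-shaped corner of the app
  allowOutboundTo: [],        // a logger has no business dialing out at all
  denyByDefault: true,
};

// A deploy tool could then refuse to ship anything whose declared or observed
// network use violates the policy, instead of discovering it post-incident.
function violates(policy: EgressPolicy, attemptedHost: string): boolean {
  return policy.denyByDefault && !policy.allowOutboundTo.includes(attemptedHost);
}

console.log(violates(loggingPolicy, "attacker-controlled.example")); // true
```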
The NPM colors fiasco was something we should have learned not to allow to repeat -- after the left-pad fiasco. The fact that we keep stepping on rakes and getting smacked in the face like that is the problem here.
I dunno what you mean; an unmoderated repository was the design goal of NPM.
When NPM launched, I was among the people voicing a preference for the philosophy that goes into maintaining (e.g.) the Debian repositories, and I still am. But some people want a package source with no gating mechanisms.
Of course there are many options for how and when to gate that lie somewhere between Debian's approach and a fully unmoderated one. But when that case was made, I was informed we were old fogies out of touch with the modern pace of development. So as far as I can tell these "fiascos", as you call them, are NPM operating exactly as intended.
I'm saying, we should have learned that was a shitty design goal, and put more stringent checks in place to ensure a single upstream developer can't ratfuck literally everyone's Node app, especially since Node has moved beyond being a startup toy and is now critical IT infrastructure for major corporations.
The Go ecosystem is still fucking clownshoes in so many ways, but even they managed to pivot away from "depend directly on whatever random developers barf onto GitHub". The Node ecosystem, by comparison, evinced all the problem awareness of the "this is fine" dog.
I feel like the actual problem behind this is finding a useful definition of what "trusted" and "untrusted" mean that doesn't just resolve to assigning blame for problems that have already happened.
I feel like "literally any URL supplied by anyone capable of visiting your website" and "some random guy from the Internet, with no connection to you or your company whatsoever, who was recently arrested for trying to burn his own house down" are both fairly obvious examples of sources from which you should not download and run random code without checking it first.
But maybe that's the part that this industry is incapable of learning.
Not only that, but we're now creating devices that depend on remote servers. It's completely obvious what the downsides to this are, yet we embrace it without question.
And why are we moving to apps that only work online? Networks are slow compared to desktops. Programming seems to be the art of doing the worst thing possible. Our computers are getting faster and faster, but we're relying on communications that are vastly slower.
And another thing: Windows updates, or even Firefox for that matter. Software shouldn't be so full of problems that you need to constantly update it. Just update it every 2 years. Sure, you get the supposed latest and greatest, but updates are a messy process.
Perhaps the difference between programming and "real" engineering is this: parsimony. In engineering, you have to do more with less. In programming, the attitude seems to be to shovel more spaghetti onto the plate.
Update: I'll add a further point. It's not just parsimony, it's also cost of errors. In physical products, a flawed design that makes it into production is costly. So you have to get it right. With software, you can afford a slap-dash approach. And that's what we see.
> Not only that, but we're now creating devices that depend on remote servers. It's completely obvious what the downsides to this are, yet we embrace it without question.
But there are upsides as well: the devices are usually attached to services for which the device acts as a conduit. If the service is valuable, you can sell devices and keep collecting money after first sale, driving huge margins. Didn't Hackernews post a lot of Fs in the chat for the original BlackBerry service -- one of the first devices of this kind to reach a mass audience -- once it was shut down?
I think it's preposterous to expect that software with millions of lines of code (like Windows or Firefox) is perfect. Software engineers are humans as well, and we do make mistakes.
You could argue that we should adopt the same testing strategy as mission-critical pieces of logic, where 1 LOC has 1+ LOCs of test code, we aim for 100% coverage, and so on. But then your Windows license would cost thousands of dollars.
And about your final remark: what you call real engineering works with well-defined requirements and a well-known context. You know beforehand the maximum weight a bridge should be able to support. You know the configuration and properties of the soil where the bridge will be built, and you have good estimates of the magnitude of 1-in-100-year extreme events.
Software doesn't have that luxury. If we build a web server to handle 1k qps it still is somewhat likely that said server might face spikes of 10k qps. Try to do that with the bridge mentioned above. Forget failing gracefully. Additionally users have few ways to use the bridge. You either cross it by car / bus / on foot. The same web server might face someone trying to send a payload of GBs where you would expect few KBs. That's at least partly why code is messy. The space of possibilities is much greater and we somehow need to write software that still works.
> The space of possibilities is much greater and we somehow need to write software that still works.
But the cost of failure is much lower, and that's why we as an industry can be so mediocre.
Physical engineering organizations have profit margins in the single digits to low tens. Software has margins in the 50s to 80s and marginal unit costs of zero. If any physical engineering organization had employees of the skill of our current SaaS software market, they'd be out of business immediately because no other industry but software can absorb such frequent and dramatic failures.
Not to be too critical, but I feel like this article was a bit naive. Nothing works "optimally" and no one knows the answers to lots of these questions. IMO it's a great thing that people are out there trying new things, and sometimes good practices stick around. We need the vanguard to be there so that we explore.
"order for learning to most effectively improve our state of operation"
Things don't move "most effectively" in anything. I guarantee that other engineering disciplines do not react to new tech / innovation in any sort of optimal way. People are driven to work in certain ways by all kinds of things, and it mostly has to do with incentives and social reasons.
"But I very rarely see articles looking back at past languages or technological fads and looking at current trends through that lens."
There are historical surveys all over the place, even on this site.
I don't want to attack this article too hard, and the motivation to ask whether or not we are really learning from our experiences is good. I just feel like people are flustered by the fact that there aren't magical best practices you can memorize and apply yet, and that most human knowledge comes from experience and intuition.
We live in the age of shipping. Immediately. Not in the age of carefully considered best practices, architectural considerations, or sound engineering. Even top notch companies ship broken software, so it's not even about money. It's about speed.
The problem is not with developers, it's in crappy foundational computing layers.
NPM is somewhat the result of JavaScript itself being so bare bones. The cost of this community-based pseudo standard library is enormous: fragile, insecure, unstable. This doesn't mean we have to drop it, it means perhaps we do need a JavaScript standard library. Possibly mixed with NPM vendor packages that take responsibility of large clusters of functionality as a single dependency that is maintained over time.
Almost all software now requires very frequent security updates, and we even dispose of physical products like smartphones because they no longer get security updates. Instead of accepting this reality, why is nobody working on a foundational software stack that isn't this damn insecure to begin with?
Almost all websites on the web do not meet accessibility standards. And then we beat developers over the head about needing to learn proper accessibility best practices. Which doesn't work. Progress is near zero. So we might as well ask: why are our tools so primitive and bad, and why do they not lead to accessible UX by default?
We complain about performance, or the lack thereof. But the problem is mostly in the stack itself, crappy and slow abstraction layers.
I could go on, but I hope you catch my drift. In development, you need to take the lowest common denominator: an average "bread programmer". We expect this person to personally dodge the many gaps in our crappy computing layers, and it's far too easy to go wrong. This single individual is expected to be a top engineer, architect, security expert, performance expert, accessibility expert, and so on. And they also need to ship tomorrow.
We basically expect 28 million software developers to all be superheroes. They're not; they keep making the same mistakes not because they suck, but because we have computing layers that suck. No amount of "awareness" will fix that.
I would also like to read a well-written analysis of COBOL, what it did right and what it did wrong.
A problem the article doesn't mention, is that the percentage of still-programming developers who are young (<30 years) is higher than in other professions, for several reasons, so the lessons learned 15 years ago are often lost.
The little I know is that the language tried to be so easy for anybody to read that it made creating abstractions impossible. If the problem fits COBOL, you get a nano-SQL-like DSL to manage on-disk records nicely. If it doesn't (controlling multiple subsystems, ad-hoc state, graphs), it becomes a monstrosity. Over the years IBM and others tried to fix that by building tooling or nicer languages transpiling down to COBOL, which led to ever bigger monstrosities. I was shown an old code base blending manual code + generated code + manual patches of the two... it was a kind of office prank to let new guys read it for a day.
You see, this sounds really cool. So what kinds of problems fit COBOL? And why did "we" conclude from this that general purpose languages were the solution, rather than building many different languages for specific problems? Did "we" even conclude that, or is that just my impression, 50 years later?
From what I remember of the times, it was the PC that killed COBOL.
COBOL was considered a "serious" language, for "serious" business problems running on "serious" hardware that lived in a server room.
PC's were not "serious", and lived on people's desks, running Lotus123 and other such trivial tools.
Then VB/Delphi/etc. came along, and with them a generation of developers writing "little" applications directly for the desktop (of which I was one, and it was fun). You could consider these as specialist DSLs for user-interactive applications. But that was not the general view.
Writing an application in VB was several orders of magnitude cheaper than writing the equivalent in COBOL, and if you squinted hard enough from a non-technical point of view they did the same job. Non-COBOL developers started writing server applications that lived on a PC in the server room for a fraction of the cost of a COBOL application living on a mainframe.
And that was that.
Except, of course, for the large institutions (banks being the obvious example) who had invested millions in developing their core applications in COBOL running on mainframes. The cost of rewriting this for no reason (because the original still works) is prohibitive. Y2K gave them a shock, but also added to the sunk cost. They still run COBOL on mainframes, and still train up new developers in COBOL.
> Writing an application in VB was several orders of magnitude cheaper
An order of magnitude is a big deal; can you be more specific? Did you mean 2 or 3 orders of magnitude, or what exactly did you mean?
By this logic, if writing an app in VB costs $20 then the COBOL equivalent costs $2,000 or $20,000?
OK, $20 seems a bit too cheap. Let's say developing the VB app cost $400. So the COBOL equivalent would have cost either $40,000 or $400,000? Is that what you mean?
> So the COBOL equivalent would have cost either $40,000 or $400,000? Is that what you mean?
If he didn't mean that, I will say, yes, that is correct. You could do things in VB in a day that teams of people in COBOL would take months to do. Many things in VB are basically impossible to do in COBOL.
For some perspective, the reason why many Medicare reforms in the U.S. are not able to be implemented is because the government is unable to actually modify the software. The code that figures out how to bill Medicare is 50 years old, has 8 million lines of code and 1.5 million lines of assembly [1].
Another high profile failure was when CA couldn't furlough state employees because they couldn't figure out how to update the software (a feature, not a bug, for many).
[1] https://www.programmableweb.com/news/how-usds-modernizing-me...
I worked on the Y2K project for a large infrastructure company in the UK.
I was getting paid £50/hour to write VB code and maintain an Access database for the change control system that the COBOL devs used. They were getting paid around £500/hour, because most of them had been hauled out of retirement, to actually change the COBOL code. There were ~1000 people in the building I was in, around half of them devs, around half of those the COBOL guys. The project lasted ~2 years. So, back-of-envelope maths: £500/hour x 250 devs x 40 hours/week x (52-4) weeks/year x 2 years = £480,000,000. That was just to pay the COBOL devs to amend the codebase, for just the billing system.
So yeah, at least 3 or 4 orders of magnitude, I'd say.
A friend's first job out of uni was working for IBM on bank maintenance contracts writing COBOL. He considered it a "useless yet valuable skill."
IMO, society doesn't conclude, it reacts, mostly by imitation and deviation, among long-term cycles.
> As a result of how young the profession is, there are few universally accepted practices and standards.
I wish the meme of "software engineering is young" would die. This profession is not younger than any other technical profession, in the modern sense. It was born out of prior practice and is still taught using the same techniques and philosophy. We are not special.
Furthermore, modern engineering is a mid to late 20th century practice - across all domains - not just ECE. We inherited some practices of the first industrial revolution, but by and large the techniques and tradition of education of engineers is consistent across the disciplines and evolved out of the post-WW2 marketplace for new technical products.
It would serve the author to remember this well, coupled with the following:
Engineering is the discipline of systematic problem solving. Computer science is the mathematical study of abstraction. Software development is the practice of applying computer science to the domain of engineering.
This results in a force amplifier - much like a lever - but rather than creating torque what we're doing is amplifying the ability of creative problem solving to be applied and re-applied without re-engineering complexity. That abstraction and reuse propagates throughout entire industries in ways that are now invisible because they have become ubiquitous.
"Does the software industry learn?"
Well of course it does. We encode our learning of the problems we face every day. The chips powering the computer I'm typing this on were designed with software that prototyped and ultimately yielded their design; packaged on circuit boards designed with software that automated hardware engineering checks to validate the design; encased in an enclosure designed with software that encompasses decades of 3D design and visualization technology; all fabricated using hardware and software tools with ambiguity resolved through digital communications that can be seen as a recursion of this paragraph.
It's turtles all the way down.
If you think we aren't learning every day, propagating that learning, and preserving it for generations to come, you do not understand the practices we undertake every day. There is so much learned technique, practice, and theory encoded into the basis of our everyday lives that it now seems so invisible you can blog about it.
Good point that the software "industry" is not new, but I would argue that it is not an industry in the literal sense, or at least that it is a special industry. The special thing about software is that it is governed by social aspects a lot.
And because of this, software has poor standardization practices.
Engineers in other sectors can prove their qualifications by showing certification diplomas; even education and years of experience count. In software, your qualification is determined by some random dude and how well you fit his CS fetish of the month, and that is it. Experience and education are only important while filtering CVs. If you designed a chip, an aircraft or a skyscraper, it is immediately impressive. If you built an FX trading platform, it means almost nothing if you are a bit skeptical about pair programming.
It also applies the other way round. If a project uses a microservice architecture, what exactly does that mean? The result would be wildly different if you compared such projects at FANG and at some investment bank. How software is tested and released will be different in every team. And how that affects the poor developer will vary too. What is good software? We can have an opinion, but there is no formal definition.
And because of this fragmentation and fluidity, software as an industry is doing very different things. It will learn in some cases; in others it will be incredibly ignorant.
The software industry is young, if not in a literal sense then certainly in a figurative sense.
Other crafts define their professionalism according to a standard baseline defined by governance and credentials. Examples of credentials are licenses and certifications. Examples of other crafts that require credentials: medicine, law, engineering, flying, driving, real estate, information security, networking, food preparation, education, public securities, and many more. When this is brought up in software the result is excuses and whining, hardly an indication of maturity.
The problems with the industry's exaggerated immaturity result in profuse subjectivity. There is always a perception, at hiring, that software developers are rare. Perhaps they are poorly trained or developed to less than desired expectations, but software developers are not rare. Nonetheless the problem is exaggerated in a cycle of attempts to increase availability among an undesirable candidate pool by abstracting away many challenges inherent to the practice. This is cyclical because instead of bridging a training or development gap it breeds entitlement and insecurity that is paid forward. Since there are no industry standards of acceptable practice, this cycle becomes a race to the bottom.
This is the very essence of young.
What exactly do you think licensing would provide that a formal quality system couldn't?
Software has a deliverable that can be tested prior to delivery. And there are domains where software has to be developed within a QMS; those domains have quietly built elaborate processes and systems dedicated to the quality of their software.
Using licensing to ensure quality results is an easier sell in scenarios where such a system can't exist, like when the practitioner doesn't get (m)any retries.
Are we worried something is falling through the cracks of the quality system approach? Are there properties of a deliverable that those systems can't ensure?
Some product domains are so inconsequential to life that it would be a laughable non-starter to write bars for their quality into law. Is a blanket licensing requirement a way to get at practitioners operating in those fields anyway? Because the individuals will be more of a pushover and easier to wage a public campaign against than the company they hypothetically work for?
> What exactly do you think licensing would provide that a formal quality system couldn't?
A uniform minimal accepted standard and ethics. Excuses against licensing generally arise from people who don't understand it, for lack of experience: an argument from ignorance.
https://www.logicallyfallacious.com/cgi-bin/uy/webpages.cgi?...
They do exist, but I do wish for more. We’re only a couple of decades away from software construction being a hundred years old (and a few from software _engineering_ being a century old) and I think it’d be wonderful to have some encyclopaedia-like resource that says “here’s everything we tried for the first hundred years, and why we do things this way at the end of it”.
For my part, I try to contextualise the way I/my peers work now with the other things we tried over my career, and maybe the decade or two before that which contextualised _my_ junior experience. For example, I run a monthly magazine with a colleague, and in our devops issue we discuss how ideas in DevOps come from manufacturing, from Rapid Application Development, and other inspiration (https://deprogrammaticaipsum.com/play-it-again-sam/). I just recorded a podcast episode where I look at _what_ documentation was considered “comprehensive” at the time of the agile manifesto, and what docs are still useful despite the industry-wide tendency to eschew all prose (https://www.sicpers.info/podcast/episode-47-comprehensive-do...).
I think learning is subjective. The NodeJS world is learning a lot about backend development I guess.
So, disclaimer: I am new to TypeScript/Node.js and the npm world, but I have a significant background in the JVM / Spring backend world with Kotlin and Java. I am also too young to be this cynical, but I am honestly just baffled.
To me the entire NodeJS backend movement seems like they are "Java sucks!" hipsters jumping into the trendy JavaScript ecosystem of the modern web era just to show how cool and progressive they are (???) - all while rediscovering and reinventing backend features that have been invented and in widespread and battle-hardened use for 2-3 decades now.
Wow, we are using NestJS, Jest, annotations, a dependency injection module and TypeScript... congrats, you just rediscovered type systems, mocking, Maven and Spring Boot, because they are very handy for backend development.
All while bloating your fresh, hip ecosystem into an even uglier Frankenstein monster than you can find anywhere in the Java enterprise, transpiling back and forth with 10,000 config files to make frameworks X, Y and Z interact with each other.
And then JavaScript is running underneath, with all its limitations and quirks, which will reach feature parity with Kotlin around ES2048. But yeah, I guess now you can send JSON natively between systems because it's first-class.
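For what it's worth, the parallel really is that direct. A minimal NestJS controller (with hypothetical route and service names) reads almost line for line like a Spring Boot @RestController with constructor injection:

```typescript
import { Controller, Get, Injectable, Module } from '@nestjs/common';

// The service: @Injectable() plays the role Spring's @Service/@Component plays.
@Injectable()
class CatsService {
  findAll(): string[] {
    return ['Tom', 'Felix']; // hypothetical data for the sketch
  }
}

// The controller: @Controller + @Get mirror @RestController + @GetMapping,
// and constructor injection works just like Spring's.
@Controller('cats')
class CatsController {
  constructor(private readonly catsService: CatsService) {}

  @Get()
  findAll(): string[] {
    return this.catsService.findAll();
  }
}

// The module wiring is the part Spring Boot's component scanning does implicitly.
@Module({ controllers: [CatsController], providers: [CatsService] })
class CatsModule {}
```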
As someone with a significant background in Node.js and a bit of experience in Java I think Node.js is much better in that respect.
Yes, we have things like NestJS, which I refuse to use precisely for the reasons that you point out: it looks like Java Spring and adds a lot of complexity where it is not needed.
I am also painfully aware that TS can be abused to write Java-like code, but that's on the developer. However, abuse notwithstanding, the fact that JS is a simpler language protects it from Java's key problem:
"Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."
My (limited) exposure to Java was quite painful because I had to deal with layers and layers of inheritance and abstractions that at some point made it very difficult to do what I needed to do. The language itself was fine, except that it easily allowed for this sort of architecture.
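A contrived sketch of that gorilla problem (class names invented for illustration): to reuse one method you end up constructing, or inheriting, everything it implicitly depends on.

```typescript
// The "jungle": a base class that quietly drags in configuration, logging, etc.
class JungleBase {
  constructor(
    protected config: Record<string, string>,
    protected logger: (s: string) => void,
  ) {}
}

class Gorilla extends JungleBase {
  // The only thing we actually wanted.
  peelBanana(banana: string): string {
    this.logger(`peeling ${banana}`);
    return `peeled ${banana}`;
  }
}

// To get the banana, you must first stand up the gorilla and its jungle.
const gorilla = new Gorilla({ region: 'web' }, console.log);
gorilla.peelBanana('banana');

// The function-first alternative: just the banana.
const peelBanana = (banana: string) => `peeled ${banana}`;
peelBanana('banana');
```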
Also, the advantage of writing the frontend and backend in the same language should not be understated.
Maybe the core issue is that you are just not a good programmer.
I’m being paid to believe otherwise. :)
> My (limited) exposure to Java was quite painful because I had to deal with layers and layers of inheritance and abstractions that at some point made it very difficult to do what I needed to do.
Hey — could you go into more detail about some of the times when inheritance and abstraction in Java were a problem, or when you had to deal with the gorilla issue?
I promise I'm not trying to catch you out or start a language war; as someone who lived in the Java world for a long time, I'm interested in which parts of it people who visited briefly have found annoying or painful to deal with.
It’s not really a language thing. I would never use Java again simply because it is owned by Oracle but I am sure it is possible to use it elegantly. Just that it seems this is not the mainstream way.
So this is going back 5 years or so. We were building an Eclipse plug-in to make an IDE for a scripting language used in our platform, for which we needed to extend another language plugin. I remember there were layers and layers of abstractions, and to hook into them I had to do some dirty things. There was a lot of boilerplate I had to write before I could even start working on what I actually needed to work on. Logic being scattered across many, many classes, some with multiple inheritance, was not fun.
I also did a similar project in JS for a web-based IDE around the same time and it was just so much simpler to do.
Eclipse itself was not a joy to work with - good idea in principle but so slow and buggy to actually use.
I also got to look at the code of a JS library that some of our Java devs wrote (some sort of RPC plugin framework) and it was simply terrible in exactly the same ways - abstraction abuse.
So basically, I get that sometimes this complexity is necessary, but it usually isn't. For example, we had a WebUI app built in Java that took nearly a GB of memory to run, because it was probably using some framework underneath that was designed for a bazillion different use cases, all of which were completely irrelevant in this case, but there was no choice to leave them out.
So I really love the simplicity of JS. I particularly love Node.js - it’s really a C++ framework with a JS API but it’s such a joy to use for all network service related things. Add some sensible use of TS (for the type system rather than the additional OO features that pretend it is not JS) and it’s now even better.
No. The burnout rate is sufficient that most people in the software industry have five or fewer years of experience, and very often it's not actually five years of cumulative experience, but the same year of experience five times, served consecutively.
This five year cycle turns up very often; that seems to be about the length of time that it takes for an idea to form, ride the hype wave, sink into the trough of disillusionment, and then be replaced by a new hot thing pushed by the next generation of fresh, idealistic programmers. Usually the new hot thing is the same basic idea as the hot thing one or two generations before the current hot thing, which always starts off bright-eyed and bushy-tailed before it runs into the wall of leaky abstractions and murderous corner-cases and the stark horror of being used in the real-world for real products. Then nearly all of the trade-offs and short-comings of the older iteration of the idea come back to haunt the new thing, and the gray-haired programmers over 35 who've seen this cycle come and go time after time have a chuckle.
Another story on the front page right now is "how I built a date picker". In 2022, people are still building date pickers. Imagine the progress our field could make were it not for the navel gazing, NIHing and pointless reworking.
Is that navel gazing? Is a carpenter talking about making a table in 2022 navel gazing? Maybe not the sexiest topic but I'm not sure that's the term I would use.
Software engineering isn't equivalent to carpentry, it is equivalent to CAD. So you can ask if designing new tables in Solidworks is a waste of time. And to this, I'd say the answer is not so clear-cut.
I would say it is sometimes a good use of time to build a date-picker as a learning exercise
Designing tables in Solidworks is a great idea if you're trying to learn CAD.
Recently, I spent around 3 hours building a regex to check if a string is a valid date... Usually this shouldn't take more than 30 minutes, but the environment it had to run in used some form of incomplete Postgres with basically no documentation, so it was a long trial-and-error process of figuring out what to do. Now, equipped with this knowledge, it would potentially be great for me to document my process so that others won't have the same problem, but a) I already wasted way too much time on this and b) there is no good place to document it. The best thing I could think of would be a Medium blog post, which is a band-aid.
All that said, I feel like documentation is getting better in newer technologies.
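For reference, outside such a constrained environment the usual 30-minute version looks something like this (a generic TypeScript sketch, not the expression the commenter actually ended up with, since their Postgres dialect isn't described):

```typescript
// Format check first: strictly YYYY-MM-DD.
const ISO_DATE = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$/;

// A regex alone can't catch impossible dates like 2022-02-30,
// so round-trip through Date as a second check.
function isValidIsoDate(s: string): boolean {
  if (!ISO_DATE.test(s)) return false;
  const d = new Date(`${s}T00:00:00Z`);
  return !Number.isNaN(d.getTime()) && d.toISOString().startsWith(s);
}

console.log(isValidIsoDate('2022-02-28')); // true
console.log(isValidIsoDate('2022-02-30')); // false
```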
> In 2022, people are still building date pickers.
Automotive engineers in 2022 are still creating new designs of water pumps and alternators. The new design meets some new requirements for the particular application, be they thermal, spatial or cost-related.
The date picker designed in 2022 is to be used with 2022 applications, IDEs, toolchains, etc.
Look up on YouTube how many people have built a chair, a desk, or any other basic piece of furniture. In a craft you have to do the basics yourself sometimes.
And yet, those people do not get paid hundreds of thousands of dollars a year, believe their work to be at the height of economic efficiency, or think themselves generally more intelligent than the average person.
Software engineering is not a craft. It is a discipline. If society is paying people these exorbitant sums to be the equivalent of digital Etsy moms, we're getting a raw deal.
I don't know what software engineering is, and I strongly suspect you don't, either. In fact, I think that "software engineering" is just a term invented to give non-programmers (professors, managers, consultants) a somewhat technical career path. There is certainly no engineering happening when someone creates a new web service or a new device driver. I don't think there's engineering involved in Amazon's latest cloud service or Google's latest feature.
Regardless, programming is a craft, and programmers are essentially crafts(wo)men. Everything's handmade, there are only one-offs, and there's no formal education besides some guild rules (aka coding interviews) and the science of computing, which tells you how and why a program works or doesn't, but not how you create it.
There has been some research into actual engineering practices for software. But the underlying science often gets ridiculed, not applied, by practical programmers.
> I don't know what software engineering is, and I strongly suspect you don't, either.
I do actually. I am retired now, have 25 years of experience, and a physics degree from a prestigious college. I built control systems for microscopes, founded an industrial control systems company, wrote linux drivers for tuner cards, and built downhole fluid simulation software used on oil rigs. I also did every other "soft" type of software development you can imagine.
But there are tons of rules, if the industry would bother to learn them. Comparing software developers 25 years ago, when I started, to today is embarrassing. The question of the OP is probably not "does the software industry learn?" but rather "is the software industry forgetting things it used to know?"
Every discipline needs to evolve, and crafting is how you evolve it. The software industry evolves so fast because the domain it's applied to is very large, and thus we are constantly finding new problems that need crafting.
> And yet, those people do not get paid hundreds of thousands of dollars a year, believe their work to be at the height of economic efficiency, or think themselves generally more intelligent than the average person.
Designers can be paid more than a software engineer. And we don't know what they really think, but there are plenty who think that they are better than anyone and are doing society a big favour that people simply don't understand. (You don't have to go too far from the software industry to find the most egregious examples of those :D)
While I share the sentiment, there are plenty of perfectly valid reasons one might need or want to do this--I can't comment on that particular article as it seems to have moved off the front page.
A few things that could justify rebuilding the wheel as it were:
1. The thing doesn't work for the environment it's being used in (corporate network, obscure mobile browser, screen reader, whatever)
2. The thing doesn't work for non-Gregorian calendars
3. The thing doesn't work consistently across various types of clients (this reason gives me some nostalgic flashbacks to the early days of jQuery)
4. The thing is fine, somebody just wants to see how the sausage is made in order to learn from it
There's probably other reasons one would want to re-invent a common UI element. It's really hard to say how objectively good or bad a particular attempt is without context and more clarity on what metric it's being judged by.
What exactly is wrong with reworking old things? I find that it is your kind of thinking that leads to bad software, bad reinventions of the wheel and security issues.
People in our industry need to know how to make the fundamentals from almost-scratch. In electronic engineering I learned how to bias transistors and build RC-filters. Heck, in chemistry class we even learned how diodes and resistors are made. When I studied music I learned scales, chords and rhythm... why should I, if I could just download some WAV loop off the internet, or sample some old song?
I don't see why software has to be special. Haven't we commoditised the profession enough?
Unfortunately the browser implementations of the HTML5 datepicker only make sense for an audience that understands the US date notation. They can't be configured, as far as I know.
What? The html date picker just uses the system date picker. And the value is formatted as yyyy-mm-dd when you submit it, that's not the US date format.
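That matches the spec: whatever localized picker the browser shows, the element's value is always the ISO yyyy-mm-dd string. A quick check (the element id is hypothetical):

```typescript
// <input type="date" id="start"> renders the platform's own picker,
// but reading .value always yields the ISO format, independent of locale.
const input = document.querySelector<HTMLInputElement>('#start')!;
input.addEventListener('change', () => {
  console.log(input.value);        // e.g. "2022-07-14", never "07/14/2022"
  console.log(input.valueAsDate);  // a Date object (UTC midnight) or null
});
```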
After reading your reply I checked again and it does indeed work as expected on Windows at least, so that is great news to me.
I would be with you, but the default date picker in Android is soooo shit that I agree we need to think about this more.
This is true. Because other software engineers built things, I osmosed their knowledge through the Layer 8 network. All software engineers are linked through the Khala- our sacred union of mind and machine. This ushered in our golden age, the so called Boom of Kim Dotcom.
But a new era has arisen, with a new God from this combined mind. Copilot rises and casts a great shadow on us all.
In past years, children were born with monads in their minds, functors at their fingers. Teens rose from sleep speaking in bytecode and LLVM IR. But now that is lost, mankind’s combined mind is being eaten by the power of Copilot and the children’s fingers only hit Tab.
We should never have known each other so intimately. The risen god of our inbred mind is a dark spiral. Thought itself fades.
Hardly specific to software engineering, and maybe (but not certainly) that should tell us that there is something to navel-gazing and reworking of fundamentals and basics.
If creating furniture was physically as easy as writing software, I think many people would create their own furniture, just to create a chair that fits them exactly.
Similarly in software there is not "one size that perfectly fits everybody". Therefore there is much need for customization.
When you write software you are in effect not only creating the program. You are creating a system that consists of the computer, its program, and its user. The way the task is divided between what the human does and what the computer does can be chosen in very many ways. Sometimes that results in a system where users need to do more but thus also have more options for controlling it. That is why there are so many different ways any "application" can be written.
No program is an island. It is something that must interact with its users. It is always designed to be a part of a bigger system consisting of itself and its users.
...someone really ought to invent The Date Picker to Rule Them All. Hm, maybe I'll give it a shot.
Actually, I think that's the reason people build new date pickers (or new anything else). The new (fill in the blank) to rule them all has to take every possible permutation of use cases into account, and as a result is necessarily "bloated" for whatever purpose you intend to put it to. In most cases it's not just bloated itself; it also depends on other (fill in the blank)s-to-rule-them-all that are themselves bloated and inconsistently compatible. It's also necessarily way more complex than just annotating an HTML input form element, again because of the huge number of different cases it has to support, so it takes a while to learn how to use and to discover its idiosyncrasies.
It will never learn because there is no incentive to do so.
Tech organizations value their managers in terms of their budget, headcount and being in fashion. As such the tech business model has always been about getting "good enough" results through a brute force process, possibly with the latest tech. Expertise/deep knowledge is only added/valued when the development gets stuck.
I think the premise falls down with the example given about programming languages, because in engineering everything is a trade-off. COBOL is not Go, and it would have made design choices based not just on institutional knowledge (or the lack of it) but also on the world at that time, the reality of hardware and software, etc.
Sure, GoLang could learn something like "don't make things nullable" but it would be easy to say, "yes I know that, but in our case, this creates a benefit because of X"
Just like with people who believe in TDD, DDD, etc.: those might all work, but that doesn't mean they are objectively the best way to do things in all circumstances.
I think the better target is learning how to think and analyze more effectively so we know what we are trading off more clearly.
The “industry” is a bunch of individuals who individually may or may not learn from the experience of others and themselves. And there are a lot more inexperienced developers than experienced. So on average I would say that yes the “industry” is learning but the average skill of developers might not be that high (yet).
Well, after 30 years of software development experience, I seem to be the only person I know who has learned that you have to be able to test everything outside of a production environment.
No, we mainly benefit from operating systems protecting us from bad developers (at the cost of performance). Take a look at all the LSPs on Linux as an example.
Nope.
“We learn one thing from history: that we learn nothing from history.”
― Georg Hegel
Physics progresses one funeral at a time.
Does it forget?