This is my first article written from my new home in Wellington! If you're a recruiter or a hiring manager and want to bring me on, you'd like to avail yourself of my coaching offerings or you'd like me to speak at an event, please feel free to reach out and we can have a call or meet up for a coffee!
As you can see in the note above, I've recently moved from Hamilton to Wellington, partly to reduce the cost of living (an odd thing to say about that particular move, but there you go), partly because Wellington is much more queer-friendly than Hamilton and partly because Hamilton is not great in terms of job opportunities for tech people. While we've only been here a week, I'm already much happier: the library is much better, there's a lot more of a cultural life, I love being by the ocean again and there are already a lot more interesting things to get involved with, both in my community and in the tech industry. Hell, I've already managed to fill my calendar for the next few weeks with interesting tech events to go to: things seem a lot brighter already.
The sheer scale of the change brought to light two niggling but persistent irritations that I have with the tech industry as a whole: first, the industry is awfully flat and uninterested in fields like aesthetics, the arts, politics or even most of the natural sciences, which means that I often don't like talking to tech people; and second, the tech industry, of all industries, seems unusually prone to falling for fads, poor reasoning and just flat-out making extremely stupid decisions that blow up and hurt everyone. This became obvious, I think, when I moved down and found the problem, if not eliminated, much reduced (I ran into a hiring manager who actually reads long-form content, and even if the job didn't work out, I think that in itself is a sign that we see eye-to-eye much more than a lot of the people I worked with previously). And that got me thinking. It's my contention that these two irritations are outgrowths of the same flaw in our culture, and that's what I'll discuss in this essay.
Technology and engineering are empirical fields
In general you can break down most kinds of knowledge work into two interleaving threads: a rationalist one based on deductive reasoning, where you begin with some fundamental things and work forward from there to reach conclusions with the aid of your reason (and implied massive brain), and an empirical, inductive thread that relies mostly on one's ability to carefully observe things about the world and build understanding from what you see. While almost every field of work relies on some combination of both of these, they seldom draw on both in equal proportion: a field will lean more on one than the other. Mathematics and philosophy are perhaps the prototypical rationalist fields: in them, almost all of the work and the progress depends on you a) knowing things and b) thinking really hard about them until you reach interesting and novel conclusions. While you often rely on the reasoning of multiple people bouncing off each other, and especially in philosophy you often find yourself relying on empirical observations as raw material, the actual process of producing mathematics or philosophy seems to rely considerably more on deduction and reasoning than it does on observation. To do well at these things you need a strong mental capacity for following chains of inference, reasoning from first principles, logic... that sort of thing.
On the other end of the scale, music, the fine arts and biology are all fields in which empirical observation plays the critical role. While it's useful to be able to reason deductively in all of these fields, trying to reason about a wrong note or a misshapen drawing deductively is self-evidently a fool's errand: you get better at these fields by carefully observing the subject of study and what you're doing in it, and you build understanding inductively by looking at the total of your experiences and generalising from them. If the piece of music that you sang consistently sounds flat when your chest is in a certain position, you try altering your chest position and figure out one that helps. Or, for that matter, if you see that tortoises on different islands consistently form similar adaptations to similar ecological conditions, you might conclude that this is part of a wider pattern common to lots of different kinds of life-forms. The key skill here is less being able to form or follow a long chain of inference or produce airtight deductive logic and rhetoric than it is to be able to carefully, often over long periods of time, observe the part of the world that you're studying and what you're doing, see or sense commonalities, generalise and come up with heuristics for understanding the total of the experiences that you've had. It's a much more experiential, organic form of learning (which doesn't mean that it's not scientific, despite the best efforts of tech bros to claim that biology isn't actually science).
This difference in ways of thinking and reasoning results in a core conflict in the tech industry and community. The act of writing computer code to make a computer do things is very much a rationalist field (it is arguably a branch of discrete mathematics, after all): the skills you're using are reasoning through chains of function calls to get an initial set of data into the end state that you want it in. It's carefully figuring out how to solve a mathematics problem in the least possible amount of time (attempting to optimise an algorithm in the empirical mode would likely be extremely painful) or how to minimise the space a program takes, and it's doing things like working out how to invert a binary tree: all very solid, very careful rationalist thinking. As we tend to train our software engineers and systems people in coding first and foremost, our industry naturally develops a predisposition to thinking in this way. And it isn't always wrong: the tech industry does have significant components where the rationalist mode of thought is the correct mode to take. The wider process of using computers to build systems that do useful things in the world and then maintaining those systems, however, requires practising software engineers and tech people to function mostly in the empirical mode.
To see this in practice, let's look at a simple example: a build has started failing after a dependency was updated in a relatively complex software project that our engineer (let's call her Elspeth, because why not?) works on. Elspeth first hears about this build failure at a stand-up at her company on Monday, when she's assigned the ticket for the bug. Her first task is already empirical: she needs to carefully read the ticket, understand what it's asking and the context in which it exists, and decide whether she has enough information to start or not. If she doesn't have enough information, she'll have to go and ask other people for more details or investigate herself (both of these are empirical, observational tasks). Having, by hook or by crook, gotten what she needs, her next step will likely be to try and replicate the build failure and observe what happens: first and foremost, she will want to see if the error happens on her machine or only on a CI/CD runner. If she can reproduce the error, she'll then want to carefully read the entirety of the error message line by line to understand exactly what is failing. If she's seen the error message before, she can compare against previous cases where she's seen the error and use that to guess what might be going on here; if she hasn't, by contrast, she might go look it up on StackOverflow, or even in a book (an old-fashioned approach, but one that can be startlingly effective).
Up until this point, Elspeth's been working entirely in the empirical mode: she's been close-reading, running simple experiments and observing the results carefully, comparing what she sees against previous experience, asking questions of people and hitting the books. At this juncture, though, there may be a kernel of rationalist-mode work: she'll need to reason through what she needs to change to fix the problem, what the simplest and most efficient way of doing it is and whether there are any glaring obstacles to the solution. Depending on the exact fix, she might also have to write a few unit tests to confirm that the changes she made are stable, and while getting good at writing those is largely an empirical process, the actual act of deliberately enumerating the things that the code change could do and their expected outcomes is a rationalist-mode task. That's where the rationalist-mode work ends, though: the next things Elspeth has to do are to confirm by eye that the fix works and to do some manual testing, both empirical tasks; having confirmed those, she'll have to tidy up and comment any code that she wrote, and finally write some documentation and submit a pull request. Looking at the task as a whole, almost all of it was empirical-mode work. Moreover, the empirical-mode work is far more vital to the final success of the bug fix: if Elspeth is a bit mediocre at the rationalist-mode reasoning but excellent at empirical-mode work, she might write some slightly inefficient code, slow down the system a bit or write a test with poor coverage: none of these things are good, but the thing will still basically work. If she's bad at empirical reasoning, though... she might misread the ticket and do something completely different to what was asked for. She might find herself blocked for a full week because she can't read the error message. She might not fix the thing, and because she didn't check, introduce a silent failure into the build process.
Or she might completely lie about what's in her PR. All of these things are way more likely to cause catastrophic damage or serious waste than any of the rationalist-mode infelicities would.
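The rationalist-mode step of deliberately enumerating what a change could do and pinning down its expected outcomes is exactly what a small regression test captures. Here's a minimal sketch of the kind of test Elspeth might write; the `parse_version` function and the "dependency started emitting a leading `v`" scenario are hypothetical inventions for illustration, not anything from a real project:

```python
def parse_version(raw: str) -> tuple[int, int, int]:
    """Parse a 'major.minor.patch' version string, tolerating a leading 'v'.

    Hypothetical fix: the updated dependency started emitting 'v1.2.3'
    where the old one emitted '1.2.3', which broke the build.
    """
    parts = raw.lstrip("v").split(".")
    major, minor, patch = (int(p) for p in parts[:3])
    return (major, minor, patch)


def test_parse_version():
    # The ordinary case that was already working before the fix.
    assert parse_version("1.2.3") == (1, 2, 3)
    # The new case that the updated dependency produces.
    assert parse_version("v1.2.3") == (1, 2, 3)
    # A guard against trailing metadata: must not raise.
    assert parse_version("1.2.3.dev0") == (1, 2, 3)
```

The empirical skill is knowing, from experience, *which* cases are worth enumerating; the rationalist skill is the enumeration itself.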
Very little of this empirical-mode work, however, is advertised or visible to people who're just starting out in the field, and from what's taught at high schools and most universities, as well as how we present ourselves on places like LinkedIn, Hacker News and suchlike, it's easy to get the idea that the rationalist-mode coding stuff is all there is to the industry. This creates a certain level of tension in the industry, and that in turn has some serious negative consequences.
The cultural contradiction and its consequences
This means that there's a contradiction at the heart of tech culture. Training and education in tech focus heavily on code, and indeed, despite actually writing code being a minority of what tech work is, we tend to elide the differences between the two. Tech, as an industry, is therefore deeply, deeply rationalist-brained as a matter of culture, despite the fact that the work we do largely relies on us working in an empirical mode: the people in power in the industry tend to think in one way despite the fact that doing the work well requires us to think in entirely another. The way we resolve this discomfort is by building a status hierarchy that prioritises rationalist-mode thinking at the expense of the empirical mode. The high-status jobs in the tech field tend to be ones that allow for almost entirely rationalist-mode work: high-level application engineers, people whose sole job is "writing code", leaders who "take a strategic view" or "see the big picture" (it's ironic that despite the fact that these groups tend to hate each other, they actually think very similarly, and neither is very responsive to the environment they're in). There's consistent pressure to ship, to deliver, to build and not to stay still, and people without that temperament tend not to last long in the industry, despite the fact that any serious empirical-mode thinking requires you, as often as not, to stop and observe things without judgement for quite extended periods of time. Work where empirical-mode thinking is unavoidable tends to be marginalised and made to seem less important; design and UX, data work and infrastructure/DevOps are all places where this tends to happen, despite the fact that they make up the majority of the work it takes to actually get a usable application in front of someone. All in all, thinking and working in an explicitly empirical mode will quite quickly get you marked out as "not a tech person".
So, why's this a problem? The tech industry has, after all, become wildly successful and changed the world in hundreds of ways both big and small, all evidently without being able to see into these blind spots. The glib answer is that both inside and outside of our industry, this strong rationalist bias makes it very difficult for us to actually internalise empirical information, and consequently the tech industry makes stupid decisions at a rate almost unprecedented in a modern economy.
The tech industry has extreme difficulty integrating information that doesn't have its source in an overtly rationalist process. In practice, this means that we tend to think that if you can't give a logical chain of deductions that proves that something is the case, your information is worthless. The issue with this is that day-to-day, in the tech world and outside of it, the vast bulk of the information we use to make decisions isn't this kind of information. Take a small-scale example: most of what we know about different database types is empirical knowledge, not rationalist-mode knowledge. When we're choosing a database for our new project, then, the knowledge that PostgreSQL has better transactional guarantees than Mongo and that it would be unwise to use Redis for long-term storage is all knowledge we hold in the empirical mode. If we devalue that knowledge, we lose the ability to make a sensible choice, and what's worse, because rationalist-mode thinking is great at creating post-facto explanations for decisions that were actually made irrationally, we can make fundamentally silly ideas sound much more like the kind of good, admissible idea that tech likes than the unpopular empirical idea that has only the fringe benefit of actually being correct. I'm sure you can think of your own examples, but the whole NoSQL craze is, I would argue, a good case, and probably not one that many people will defend these days.
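The empirical knowledge being leaned on here — that transactional guarantees matter — is the kind of thing you learn by watching systems fail, but the guarantee itself can be shown in miniature. Here's a minimal sketch using Python's stdlib `sqlite3` as a stand-in for a transactional database like PostgreSQL (the table and the simulated crash are invented for illustration):

```python
import sqlite3

# An in-memory database standing in for a real transactional store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # `with conn:` opens a transaction that commits on success
    # and rolls back if anything inside raises.
    with conn:
        conn.execute(
            "UPDATE accounts SET balance = balance - 100 WHERE name = 'alice'"
        )
        raise RuntimeError("simulated crash mid-transfer")
        # The matching credit to bob is never reached.
except RuntimeError:
    pass

# The half-finished transfer was rolled back: no money vanished.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
assert balances == {"alice": 100, "bob": 0}
```

The point isn't the code, of course: it's that knowing *which* store gives you this behaviour under load, and which quietly doesn't, is empirical knowledge earned through observation.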
In a similar way, we make it remarkably easy to lose institutional knowledge. The line of argument that goes "we've tried this a few times and it never ends well" with regards to technical decisions can, admittedly, lead to a somewhat sclerotic and slow-moving company culture and needs to be counterbalanced. But holding that line of argument to be invalid tout court creates precisely the pattern of "Not invented here" syndrome and repeated work that you see in tech, where ideas like the metaverse are created over and over again and fail over and over again because the industry is constitutionally incapable of learning from experience.
This inability to reason inductively from empirical information also means that we really struggle to draw reasonable conclusions about the paths technology is likely to take from historical data (another generally empirical discipline, and another that we tend to devalue). The current LLM moment is a good example of this: we know, from history and economics, an awful lot about what an economic bubble looks like, going back as far as the Roman Empire, with the Dutch tulip mania and the South Sea Bubble being two extremely well-known examples. More recently, the dotcom bubble and the ongoing cryptocurrency mess suggest strongly that if the tech industry thinks for some reason that it's immune to this kind of massive economic distortion, it's wrong. The current LLM situation has all the markers of an economic bubble. Tech, however, thanks to the fact that empirical information is icky and doesn't quite count in tech circles, has difficulty not only identifying this fact but even looking at the situation from the outside: we're stuck in our sphere arguing about the technical capabilities of the models (in remarkably vague terms, as pinning down what they actually do would also require empirical-mode reasoning) and unable to quite grasp what the technology looks like from the outside or why people outside the tech sphere might be weird about it.
To expand on that last point, the highly rationalist mode that tech works in makes it very difficult to maintain a theory of mind for other people, who largely think much more empirically than the tech world does. We see this a lot in the gaming industry: while I'm sure that a lot of things like microtransactions are just money-grubbing, I suspect that there's a certain amount of this kind of rationalist bias involved too. After all, "every time someone else has tried this it was a massive disaster that left them universally hated" or "live-service games are very difficult to get right and massive reputational risks" aren't, in the rationalist mode, valid arguments, so a lot of the gaming industry simply can't integrate the main things that would invalidate these ideas into how they actually think. This means that repeating the same stupid decisions over and over again is very easy to do, and importantly it can be done without ever having to actually reflect on mistakes. LLM companies do this to a similar extent: being unable to look at their industry from the outside, they're largely blind to how disliked they are in the wider population, how useless the tools seem to most people and how quickly they're burning up whatever goodwill they had available. It seems, in general, that the rationalist bias in the industry is quite consistently going to lead to messy, expensive disasters.
That said, for all that we have lots of stupid ideas in tech, I don't think this is the worst that it does. The worst damage that the rationalist bias does is, in fact, in the wider human sphere.
A fundamentally ugly kind of personality
The fact of the matter is that this rejection of empiricism seems to lead to the personalities who succeed in or adapt themselves to tech becoming fundamentally ugly. Tech, as a community and an industry, tends to skew fundamentally reactionary and inhuman, and speaking in my capacity as a humanist, that makes tech spaces fundamentally unwelcoming to people like me. For example, we learned, over the twentieth century, some rather hard and painful lessons about why biological essentialism is simply wrong: biology has demonstrated that it's comprehensively false on basically any level you look at it, and history has amply demonstrated that it's monstrously immoral and that the bloodstains will take an eternity to wash out. And yet much of the tech industry doesn't seem to have gotten the memo. James Damore and a distressingly large number of other men in the industry seem to think that women just can't write code, despite this being flat-out false. Eugenics is disturbingly popular in the tech industry, and every year or so I seem to find myself having to explain to yet another person that literally every time it's been tried it has inevitably led to the kinds of atrocity that I'm willing to suspend my opposition to the death penalty for (to be clear, I am almost universally against it, but I find it hard to argue that the likes of Julius Streicher didn't have to hang). And of course we have the phenomenon where a startlingly large number of prominent figures in the tech world are enthusiastic about fascism, despite the fact that even a fairly cursory reading of history shows that supporting these regimes leads, after much pain and bloodshed on the part of people who didn't deserve it, to the gallows or a life spent in a prison cell. So much of the tech world seems, to a first approximation, completely unable to perceive these very hard empirical facts that we nonetheless can't prove deductively.
The fact is that any serious critique of fascism, sexism, racism, transphobia and other bigotries is fundamentally empirical: when we observe carefully, we can see that they obviously aren't true. Eliminating this as a valid line of reasoning means removing basically every tool against bigotry that we have at our disposal, and opens the door to whatever forms of bigotry we want. Even when we don't support those forms of bigotry, it's basically impossible to eliminate them, because when someone like me says, for example, "we've debated this over and over, repeatedly proved it wrong, and every time this has been tried it has a) led to atrocities and b) led to the institution trying it being crushed by less bigoted ones", I am held to be irrational and to be refusing to let people discuss heterodox ideas. And so we find ourselves having to repeatedly discuss fascism, eugenics and any list of other horrific ideas as though they're fundamentally legitimate, in an environment where any serious criticism of them is held to be invalid a priori because it relies on the wrong kinds of reasoning. No wonder that large parts of the industry have basically fallen to fascism, and that explicit antifascism will win you few friends even in the places where the industry isn't overtly Nazi.
A similar phenomenon happens with workers' rights, art and AI (three areas that are, these days, heavily interrelated). Arguments such as "LLM art is deeply dreary and says nothing of interest", "these models were trained on the massive theft of work from others and are thus immoral" and "this technology is being used as an excuse to gut the labour market and immiserate workers" all function in the empirical mode: people are saying that this is happening and that they dislike it. Suffering from massive rationalist poisoning, however, the decision-makers of the tech industry don't understand how or why these arguments are valid, which makes it impossible for them to empathise with the people making the complaint and either change their behaviour or at least get better PR: rather, they'll stand up in front of an audience of angry PC gamers and say something like "Do you not have phones?" when announcing that their next Diablo release will be mobile-only. They don't see why their suggestion that LLMs will replace all art and writing and lots of workers is offensive to people and will make them angry and disgusted, and they cannot for the life of them see why the idea of getting an AI to make up a bedtime story for their children is not forward-thinking and innovative but grossly offensive to the vast bulk of parents. The insistence on airtight chains of reasoning has cooked their fucking brains that much.
But of course we have to note that I said "ugly" at the start of this, not just "Nazi" or "sociopathic". The issue is that at some point the only reasonable response to all of this becomes "fuck you, you're awful, I want nothing to do with you". But this isn't seen as a valid line of argument in the tech world either, because it is itself an empirical statement (in this case about one's own internal emotional state and outlook). The idea that certain behaviours and patterns might, if persistent, make other people not want to have much to do with you is one that is deeply alien to large parts of the tech world, and one can easily reason from there that anybody pointing out that someone's behaviour is absolutely fucking godawful is themselves being irrational and should be excluded from the group. The industry thus becomes a place that includes some of the most awful people you know in positions of power, and one that is more or less incapable of self-regulating.
It's important to stress that most places outside of, say, DOGE, don't go all the way there: they're socialised well enough that people don't have large-scale blow-outs like that. But the pattern colours enough tech spaces to a sufficient degree that it makes them uncomfortable, not only for women, people of colour and other minorities, but for anyone who tends to think empirically, or in fact, think at all. If you're the kind of person who appreciates art or music, likes to read or maybe wants to talk about emotions: the kind of person who, in general, enjoys engaging with fields where empiricism is critical, tech can feel anywhere between a bit sad and flat and outright hostile. And in the worst case, of course, well... gestures around. I think that the fact that the tech industry was welcoming enough to these forms of thinking that it allowed overt Nazis to take over most of our infrastructure is a bit of a problem, and we should probably work out how to stop it happening again in future. We'd all be a lot happier.
Learning to observe
So, how do we change this? By this point, this manner of thinking and the associated biases have become so ingrained in the industry that shifting it for the current generation is going to be very difficult. The leaders in the field are firmly entrenched in that way of thinking, and barring massive state intervention to remove them and replace them with healthier leaders (which I'm not against, to be fair: I think Musk, Bezos and Thiel and the rest of that crowd should be at least removed from leading their companies and replaced by state appointees who might reorganise the companies to be less... like that), their presence is going to shape the industry culture for years to come. Besides, by the time a tech worker's been in the industry for a few years, shifting the way they approach the world becomes increasingly difficult. While it's a slow path to fixing the problem, I think the best way we have of shifting the industry's approach is educational.
Now, personally, I think it'd be great if we got everyone to do a rigorous liberal arts program before they even touched a compiler professionally, but I reluctantly have to admit that I don't think anybody's going to go for that. We could, however, rework existing computer science programs considerably. Currently the bulk of people studying "tech" at university don't study anything else: it's a straight shot of nothing but computers, with maybe a couple of general education papers on the side. If people choose to study at a boot camp or somewhere else that isn't a university, the problem's even worse. While of course I can't exactly prove this, the observational evidence certainly gives grounds to believe that doing computer science or coding education in this way is deeply detrimental to the people who undergo that education and to wider society: it completely fails to prepare people to be effective workers or citizens, and as a lot of the technological skills that it supposedly teaches are things that you can learn on the job or by yourself, it doesn't even really teach them to write code well.
Here, then, is what I'd suggest: we eliminate coding and computer science as an undergraduate discipline. Much of the theoretical content of computer science that isn't "writing code" belongs more comfortably in discrete mathematics in any case: it's how I learned it and it seems to have done me some good. If people have a real, abiding interest in that branch of mathematics, they can do a mathematics degree; otherwise we can direct people wanting a career in tech towards the natural sciences, economics, statistics or even some humanities disciplines. We can then treat the specific coding knowledge (at least, those parts of it that we don't cover in other fields, a lot of which use code for domain-specific purposes) as a kind of honours year: if you want a professional qualification in tech shit, you can do an extra year's study at the end of your degree to learn the professional skills and get an extra certificate at the end of things.
This would have a number of advantages over the current system. First off, people wishing to get into tech would have to learn a domain in depth and gain exposure to things that aren't just "writing code": this might hopefully lessen the load of truly terrible opinions that have never been subjected to any scrutiny or challenged by observation that we have to deal with in tech. People would become better-rounded, less susceptible to passing fads and hopefully more engaged citizens, and the more intensive process might help make sure that fewer people who are only in it for the money wind up in the profession. Professionalising the training would also have the advantage of increasing the perceived rigour and seriousness of the industry: while there is a lot to like about the current "anyone with a bit of training can get started" ethos, it is also responsible for a lot of dangerously bad code in places where dangerously bad code shouldn't be. Finally, doing something else, especially in the suggested fields, makes it significantly more likely that people who end up in the industry will have the empirical and observational skills that we so badly need more of.
Above all else, it's our inability to stop and observe before rushing in and doing things that is responsible for our industry's problems: both the internal ones and the ones that we create for other people. I'm fairly confident that we'd be both happier and contribute more effectively to the world if we observed the world more and disrupted it less, and while I don't know to what degree we can make that happen writ large, it's certainly something that personally informs how I wish to approach the world and the industry as a technical professional.