Welcome to the Convivial Society, a newsletter exploring the intersection of technology, culture, and the moral life. In this installment I return to one of the earliest themes of my writing about technology: the myth of technological inevitability. When I’ve had occasion over the past several months to address the question of AI, the one point that I’ve felt compelled to make again and again is that there is no inevitability. There are choices to be made, though it can be convenient to imagine otherwise. And, as Joseph Weizenbaum knew well, it takes courage to make them.
I began writing about technology and culture around 2010. It didn’t take long for me to recognize one of the most common tropes deployed by those whose business it was to promote new technologies: the trope of technological inevitability. By 2012, I was writing about how those who deployed this trope suffered from a Borg Complex. Alluding to the cybernetic alien race in the Star Trek universe, I defined a Borg Complex as a malady that afflicts “technologists, writers, and pundits who explicitly assert or implicitly assume that resistance to technology is futile.”
The first time I identified the tendency in this way, I argued that “the spirit of the Borg lives in writers and pundits who take it upon themselves to prod on all of those they deem to be deliberately slow on the technological uptake. These self-appointed evangelists of technological assimilation would have us all abandon any critique of technology and simply adapt to the demands of technological society.”
I then proceeded to outline a series of symptoms by which we might diagnose someone with a Borg Complex:
Makes grandiose, but unsupported claims for technology
Uses the term Luddite a-historically and as a casual slur
Pays lip service to, but ultimately dismisses genuine concerns
Equates resistance or caution to reactionary nostalgia
Starkly and matter-of-factly frames the case for assimilation
Announces the bleak future for those who refuse to assimilate
Expresses contemptuous disregard for past cultural achievements
Refers to historical antecedents solely to dismiss present concerns
Throughout the middle period of my late blog, The Frailest Thing, I would periodically post to the Borg Complex files some then-recent example of the rhetoric of technological inevitability. Before most of you found your way to this newsletter, I revisited some of these themes in a post from 2021, adding some new voices to my argument, including the informed perspective of Thomas Misa, a historian of technology at the University of Minnesota:
“… [W]e lack a full picture of the technological alternatives that once existed as well as knowledge and understanding of the decision-making processes that winnowed them down. We see only the results and assume, understandably but in error, that there was no other path to the present. Yet it is a truism that the victors write the history, in technology as in war, and the technological ‘paths not taken’ are often suppressed or ignored.”
And then there were Margaret Heffernan’s superb reflections on the theme. The goal of those who deploy the rhetoric of technological inevitability, she rightly insists, “isn’t participation, but submission.” “Anyone claiming to know the future,” she adds, “is just trying to own it.”
I don’t need to tell you that the rhetoric of technological inevitability has dominated discussions (or, more likely, directives and pronouncements) regarding AI that you’ve encountered over the past two or three years. In particular, AI-talk has manifested a distinctly quasi-religious variety of the Borg Complex, which can be particularly pernicious since it understands resistance to be not only mistaken but heretical and immoral.
In fact, it sometimes seems to me as if the adoption of AI is driven chiefly by the rhetoric of inevitability exacerbated by the related logics of the prisoner’s dilemma and an arms race. Indeed, it is a curious fact that some of the very people who are ostensibly convinced of the inevitability of AI nonetheless lack the confidence you would expect to accompany such conviction and instead seem bent on exerting their power and wealth to make certain that AI is imposed on society. I’m calling this tendency, with a nod to Herman and Chomsky, manufactured inevitability.
It was a phrase that first came to me back in June when I read about how Ohio State was mandating the use of AI as part of its AI Fluency initiative. And I was prompted to write this up by news that Purdue was making “AI competency” a graduation requirement. It is hardly surprising that institutions of higher education, which stand to receive substantial funding from tech companies like OpenAI and Google, would find ways to mandate the use of AI under the guise of preparing students for the workforce of the future (which often turns out to be a fool’s errand). But there are, of course, countless banal instances of AI being surreptitiously woven into the fabric of ordinary experience, from search engine results to software updates that introduce AI functions nobody asked for. There is no better way to reinforce the myth of technological inevitability than to stage the ubiquity of AI in such a way that it renders the adoption of AI a fait accompli.
I’d be glad for you to share any other instances of manufactured inevitability that you’ve observed.
I should acknowledge that while there is no inevitability, agency and responsibility are unequally distributed. Thus, it is worth noting that the strategy of manufacturing inevitability has the effect of obfuscating responsibility, especially on the part of those who in fact have the greatest agency over the shape of the techno-economic structures that order contemporary society for the rest of us.
The pioneering computer scientist Joseph Weizenbaum told us as much nearly 50 years ago in Computer Power and Human Reason: “The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But in fact there are actors.”
The myth of technological inevitability is a powerful tranquilizer of the conscience. It bears repeating.
More from Weizenbaum, who writes with refreshing conviction:
“But just as I have no license to dictate the actions of others, neither do the constructors of the world in which I must live have a right to unconditionally impose their visions on me. Scientists and technologists have, because of their power, an especially heavy responsibility, one that is not to be sloughed off behind a facade of slogans such as that of technological inevitability.”
But Weizenbaum understood one more thing of consequence: the necessity of courage. Allow me to quote him at length:
I recently heard an officer of a great university publicly defend an important policy decision he had made, one that many of the university’s students and faculty opposed on moral grounds, with the words: ‘We could have taken a moral stand, but what good would that have done?’ But the good of a moral act inheres in the act itself. That is why an act can itself ennoble or corrupt the person who performs it. The victory of instrumental reason in our time has brought about the virtual disappearance of this insight and thus perforce the delegitimation of the very idea of nobility.
I am aware, of course, that hardly anyone who reads these lines will feel himself addressed by them—so deep has the conviction that we are all governed by anonymous forces beyond our control penetrated into the shared consciousness of our time. And accompanying this conviction is a debasement of the idea of civil courage.
It is a widely held but a grievously mistaken belief that civil courage finds exercise only in the context of world-shaking events. To the contrary, its most arduous exercise is often in those small contexts in which the challenge is to overcome the fears induced by petty concerns over career, over our relationships to those who appear to have power over us, over whatever may disturb the tranquility of our mundane existence.
If this book is to be seen as advocating anything, then let it be a call to this simple kind of courage. And, because this book is, after all, about computers, let that call be heard mainly by teachers of computer science.
I’m not a computer scientist, but I do, in fact, feel myself addressed by Weizenbaum’s words. While the degree of agency we share over the shape of our world varies greatly, I remain convinced that we all have choices to make. But these choices are not without consequences or costs. And each one of us will find, from time to time, the need for courage, and it strikes me that such courage, call it civil courage or courage in the ordinary, is the antidote to what Arendt famously diagnosed as the banality of evil.