Progress Without Disruption - Christopher Butler


Can there be progress without disruption?

It sometimes feels as if our culture has become addicted to doom — needing time to be marked by fearful anticipation rather than something more proactive or controlled. We’ve learned to expect that change must be chaotic, that innovation must be destructive, that the future must collide with the now whether we want it to or not.

But there’s nothing about progress that inherently requires disruption except our inability to cooperate for the sake of stability.

Consider the current conversation around AI and the future of work. Most people seem to agree there are three possible scenarios:

Scenario A: AI replaces nearly all functions provided by people so quickly that society can’t respond as it has to previous industrial revolutions. Mass unemployment destabilizes social structures supported by wage taxation. Even in a soft landing — universal basic income, increased corporate taxation — this is seen as catastrophic because it is contrary to the current capitalist paradigm and leaves humans with the existential problem of separating meaning and purpose from work.

Scenario B: AI replaces most current functions, but not as quickly. Unemployment persists, but the gradual shift creates opportunities for humans to differentiate themselves from machines and derive value accordingly. Painful, but manageable. And afterward, this may even make possible a more deliberate and gentle passage to a new kind of society.

Scenario C: We recognize that AI’s current trajectory is destructive to the social fabric. We slow it down, change how it’s used, possibly reject aspects of it entirely. This would be the Amish approach — where observation and discussion about how a technology benefits the community determines its acceptance, use, and integration.

Most people assume Scenario C is impossible. We’re already too far down the path, they say. The technology exists, the investment has been made, the momentum is unstoppable. You can’t put the genie back in the bottle. Perhaps power and money are too committed now — unwilling and unable to accept regulation — untouchable by those who want something different.

But perhaps not. There are cultures that show us the way.

Despite common understanding, the Amish aren’t technophobes. They do use technology, just not everything that comes along. They carefully evaluate tools communally, based on whether they strengthen or weaken their social fabric. They observe. They choose. They have agency. A telephone might help, but only if placed in a shared building rather than individual homes, so it doesn’t fragment family time. The Amish demonstrate that discernment does not mean rejection.

It seems we’ve lost the ability to do the same. More accurately, though, I believe we’ve been convinced we’ve lost it.

We’ve internalized technological determinism so completely that choosing not to adopt something — or choosing to adopt it slowly, carefully, with conditions — feels like naive resistance to inevitable progress. But “inevitable” is doing a lot of work in that sentence. Inevitable for whom? Inevitable according to whom?

The conflation of progress with disruption serves specific interests. It benefits those who profit from rapid, uncontrolled deployment. “You can’t stop progress” is a very convenient argument when you’re the one profiting from the chaos, when your business model depends on moving fast and breaking things before anyone can evaluate whether those things should be broken.

Disruption benefits the information economy. It makes a good story when it happens, and a seductive — if not addictive — constant drip of doom when it feels as if it’s just around the corner. I’d love to live in a world in which good future narratives outsold apocalyptic ones, but I don’t. And so the medium creates the message, and the message creates the moment.

Disruption has become such a powerful memetic force that we’ve simply forgotten it’s optional. We’ve been taught that technological change must be chaotic, uncontrolled, and socially destructive — that anything less isn’t real innovation. But this framing is itself a choice, one that’s been made for us by people with specific incentives.

Think about what we’ve accepted as inevitable in the last twenty-five years: the fragmentation of attention, the erosion of privacy, the monetization of human connection, the replacement of public spaces with corporate platforms, the optimization of everything for engagement regardless of human cost. We were told these were the price of progress, that resistance was futile, that the technology was neutral and the outcomes were just the natural evolution of how humans interact.

But none of it was inevitable. All of it was chosen. Not by us, but for us.

The doom addiction makes sense in this context. If change is inevitable and we have no agency over it, then the most we can do is anticipate its arrival with a mixture of dread and fascination. Doom is exciting. Doom is dramatic. Doom absolves us of responsibility because if catastrophe is coming regardless of what we do, why bother trying to prevent it?

But stability? Cooperation? Careful evaluation of whether a technology actually serves us? These feel boring, impossible, naive. They require something we seem to have lost: the belief that we can collectively decide how technology integrates into our lives rather than simply accepting whatever technologists and investors choose to build.

I am not anti-technology. I have always been fascinated, excited, and motivated by new things. I am, however, choosy. This is about reclaiming the capacity to say “not like this” or “not yet” or “only under these conditions.” It’s about recognizing that the speed and manner of technological adoption is itself a choice, and one that should be made collectively rather than imposed by those who stand to profit.

What would it take to choose Scenario C? Not to reject AI entirely, but to evaluate it the way the Amish evaluate technology — with the community’s wellbeing as the primary criterion rather than efficiency or profit or inevitability.

It would require cooperation. It would require prioritizing stability over disruption. It would require believing that we have agency over how our world changes, that progress doesn’t have to be chaotic, that we can choose to integrate new capabilities slowly and carefully rather than accepting whatever pace Silicon Valley sets.

It would require rejecting the narrative that technological change is a force of nature rather than a series of choices made by people with specific interests.

Maybe we’ve actually lost the ability to cooperate at that scale. Maybe the forces pushing for rapid deployment are too powerful, too entrenched, too good at framing their interests as inevitable progress. Maybe Scenario C really is impossible.

But I suspect it’s less that we’ve lost the ability and more that we’ve forgotten we ever had it. We’ve been told for so long that we can’t choose, that resistance is futile, that disruption is the price of progress, that we’ve internalized it as truth.

The question isn’t whether we can have progress without disruption. The question is whether we can remember that we’re allowed to choose, and whether enough of us can do that at the same time.