Whose risks? Whose benefits? | everything changes

by Mandy Brown

I’VE WRITTEN BEFORE ABOUT RISK, and about reframing the risk of danger into the risk of harm. And I want to come back to this topic, and to one of the common orientations I have observed, both in myself and in others: when trying to make some change, we’re apt to notice and calculate all of the risks associated with making that change. We’re much less apt to notice all of the risks of not making that change—of persisting on the current path. It’s a kind of status quo fallacy, I think: an assumption that only deviating from the current route is risky, while staying put somehow is not. But every time I ask the question, “what’s the risk of not doing this thing?”—whether that question is put to myself or someone else—there’s a pause, an inhale, an “Oh,” and then another “Oh,” as the road ahead suddenly reveals itself to be partially in shadow, as the little bush around the bend abruptly seems like a good place for something monstrous to crouch and hide, as the cliff’s edge looks a lot closer than it seemed a moment before.

What I find helpful about this recentering—of considering both the risk of change and the risk of the status quo—is how it reorients us towards a more realistic view of risk. No path, no choice, is ever entirely free of risks; no road we could walk is always and forever perfectly safe and clear and certain. When deciding whether to take a fork in the road or continue on, the choice isn’t between taking a risk and playing it safe; it’s between these risks and those risks. It’s a choice of which risks, not whether to risk anything at all.

I find that realization quite sobering.

But then, I can hear you saying, how do I decide between these risks? How do I count the costs of each potential risk in order to know which path is the safer one? I’m afraid I have no math here that will help: part of any risk calculation is a coefficient of uncertainty that gives rise to some positive margin of error, some room to maneuver in many directions. But I do have another angle to consider, and that’s to ask, who bears the risks? and who benefits?

I owe this framing to the late, great Ursula Franklin, and to a speech she gave in 1986, in which she said:

We cannot be part of a discussion on what risks a certain technology has without asking whose risks. It makes an awful lot of difference. Assume you are talking about video display terminals, for example; the great discussion is “Are they or are they not putting the operator’s health or eyes at risk?” You don’t discuss whether there are risks; you discuss whose risks. Who is it that is at risk? It’s quite pointless to talk about risk-benefit without saying “Are those who are at risk also getting the benefits, or are those who are getting the benefits very far removed from the risk?”…The questions to ask are “Whose benefits? Whose risks?” rather than “What benefits? What risks?”

Franklin is making a political statement here—too often, the risks of major technological change are borne by one set of people while the benefits accrue to another. This is true on a large scale—witness artists bearing the risk of their art being reproduced ad nauseam, while venture capitalists and their coterie reap the benefits of sky-high valuations. But it shows up in smaller ways as well. A leader makes a bet that, if it turns out to be wrong, leads to layoffs on their team; they get the benefit of a win, but the team bears the cost of a loss. An engineer forgoes collaboration because they despair of the messy human work; when their code works, they get respect, but when it doesn’t, the whole project is delayed and the team takes the hit.

Locating those mismatches between whose risks and whose benefits opens up opportunities to correct them: you can ask questions like, how can the people bearing the risks also share in the benefits? Or, how can those poised to enjoy all the benefits be brought in to share the risks? How might this effort be redesigned so that the risks and benefits accrue more equitably?

But there’s another thing I like about this framing, which is that it forces a kind of clarity and precision about the risks and benefits. Who, precisely, is at risk? And who stands to benefit? This is the territory of pre-mortems, of asking what could go wrong, in order to anticipate and attend to those challenges ahead of time. These questions jolt us out of a world of fuzzy risks and fantastical benefits and force real talk about why some change is being pursued, and what we think could plausibly come from it. “There’s a risk this business line will fail,” is a very different statement from, “There’s a risk we have to lay off this division.” Likewise, “The stock prices will go up” is not the same as saying “Every member of the team will see their compensation increase.”

Of course, that precision is at best asymptotic to reality. We can never be certain of what will happen as a result of our actions. The future remains, as always, undiscovered. But, paradoxically, by seeking that precision we invite a level of skepticism that can be really instructive. Asking not only “what can go wrong?” but “who will be harmed if this doesn’t go according to plan?” and “who will benefit if it does?” is a way of screwing ourselves to the sticking place. It shifts a naked risk-benefit analysis away from systems and balance sheets and stock prices and towards the people (or other living things) who are the real recipients of that change.

Because at the end of the day, the ones who both bear and earn the consequences of every change are the living—people, and the ecosystems with which they are interdependent. Any risk-benefit analysis that elides that detail is, at best, lacking clarity, and at worst, actively dishonest. And maybe, sometimes, we’re on the receiving end of those analyses and powerless to do much about them. But often we’re the ones taking the measure before we cut, and we can choose to be clear-eyed when we do.