Technology’s Impact on Morality (cacm.acm.org)
This seems like a bad idea, mostly. I think there are ethical frameworks in which technologies should operate - just as a newspaper should not put out a full page advert calling for violence against a person or group - but "morality as a feature" sounds obviously dystopian.
Once again someone is calling for power to be moved away from people and centralised. What's labelled "morality as a feature" could just be "de facto censorship".
To put it another way: causing violence in Myanmar is the worst-case scenario for freedom of speech, and even then it is only a worst case because of the violence itself; the speech alone would not make it a worst case. Why does the article not consider the worst case (or even the likely case) of having speech governed by a central body?
Were you reading another article? The author of this one is not "calling for power to be moved away from people and centralised". No argument like that appears anywhere in the article.
They are; you're just ignoring it. 'Morality' is always presented in a binary context of moral/immoral, and many of the examples in the article fall into the same pattern. You don't need to advocate centralizing power when discussing moral frameworks, because it's implied in the conversation that, hypothetically, power will be centralized around the 'moral' viewpoint. For the same reason, all morals are inherently supremacist: if they weren't, they wouldn't be labeled the moral course of action.
The tl;dr is that a system that enforces moral communication would involve centralisation of what is considered moral.
Given that a technology is necessarily the expression of a power differential over its subject, whether it's a stick being used as a lever or a GANN, it puts whoever wields it in a position where they must have an abstract understanding of the consequences, and then actively articulate choices about how to apply it - so yes, all tech is a forcing function for moral questions. Particularly "should I?" and "why?"
We already consult tech for decisions with moral consequences when we google them. We also test ideas using tech, posting them on social media to see whether they align with popular narratives. I'd even say most educated people under the age of 40 are already entirely dependent on social media for moral guidance and approval. "Do it for the 'gram" summarizes it well. However, TV was the same way. When most of the reference relationships you have are mediated by tech, your reference point is going to be an artifact of that tech. Then as now, the medium is the message.
The other piece is that Clarke's adage about sufficiently advanced tech being indistinguishable from magic becomes newly interesting when you see powerful tech causing people to optimize their lives and moral choices to curry favour with it - in effect, worshiping the magic. In this context, social media may just be another form of primitive magical worship that subordinates the human spirit to bargaining with flighty and mercurial gods, with superstitions about its workings in the place of a coherent ethical framework.
I think this is what Gaiman's "American Gods" was about.
I'm hoping someone who's actually studied philosophy can weigh in on this:
I notice that almost all debates on these topics skip over the question of what moral framework will underpin the discussion.
It seems to me that a lot rides on that, for several reasons:
- There's a real chance that the participants don't actually agree on the unstated framework, and
- It makes it hard to actually argue from shared premises to a compelling conclusion.
So my impression is that most of these discussions end up with people arguing past each other, with little to show for it.
Am I missing something? I can't believe this is purely a modern phenomenon.
> Am I missing something?
No. A lot of people who write about these subjects assume that everyone who reads what they say has the same moral framework as they do. It is implicit, as you say. Not defining that framework explicitly means you won't get anywhere with a discussion, which is ironic because if you try to do so in a discussion folks will tell you that won't get you anywhere :)
I had the same reservations about this as well and ended up stumbling on this article which helped clarify my thoughts on it a bit: https://plato.stanford.edu/entries/morality-definition/
Whilst it doesn't really settle the debate about which moral framework is being used, it does give a good lens by which to judge it. From the article:
There does not seem to be much reason to think that a single definition of morality will be applicable to all moral discussions. One reason for this is that “morality” seems to be used in two distinct broad senses: a descriptive sense and a normative sense. More particularly, the term “morality” can be used either
1. descriptively to refer to certain codes of conduct put forward by a society or a group (such as a religion), or accepted by an individual for her own behavior, or
2. normatively to refer to a code of conduct that, given specified conditions, would be put forward by all rational people.
These contradictory discussions have always been out in the open; otherwise the course of action for society would be clear and would just need execution.
As for people speaking past one another, I don't think it has ever been as dysfunctional as it is right now. It obviously correlates with widespread internet access, but the addressable cause is probably more complex. One thing that used to happen, though, was that future generations that hadn't yet picked a side could be swayed.
Mostly I think people underestimate how good at "bullshitting" we've become on average compared to previous generations (since that's how you win an internet argument) and how that needs to be called out harshly -- we can't build anything on bullshitting. We're also not indoctrinating our young to have a deeper sense of responsibility for others and society in general, and while that's optimal for the individual, collectively we all lose.
> ...we can't build anything on bullshitting.
I completely agree. However, I feel many of the problems we face as a society exist because far too much has already been built on it. It was once strategically placed where necessary to maintain operations. Now it's just an industry all to itself.
You cannot teach children via hypocrisy. At best you teach them "it is only okay when adults do it". Their job is to observe and they will detect such insincerity.
Really, indoctrination is the wrong way to think about it, period. Children are not to be your puppets but your successors, who will always diverge from you.
I am also of the opinion that 99% of the time "the collective" is used merely as an excuse to manufacture consent by claiming to speak for all or represent their interests. Communists are especially infamous for this "listen to the working class, but only if they agree with me" sleight of hand! I find it better to do away with the "collective" bullcrap and think in terms of individuals and "mirrored standards and impacts". You don't want "bad people" detained indefinitely without a trial, because then there is nothing stopping it from applying to you. It is damn simple, but people tend to fail that mirror test all the time.
As in the original comment, I feel like we're talking past one another. Of course I think we should strive to be what we teach our children.
Maybe you didn't like my use of "indoctrination", as you see it having a negative connotation. I meant to say that some things need to be taught by repeated explanation and not by example alone -- mostly higher-order concepts.
I agree, and I particularly wish that people would use "ethics" instead of "morals" more frequently.
My understanding is that ethics are how we treat other beings, and morals are rules we follow (think "moralizing").
I'm much more interested in the former.
If I'm not mistaken (based on a bunch of philosophy courses I took at Rutgers University a bit over a decade ago), philosophers use "morality" and "ethics" interchangeably - and both terms refer, roughly speaking, to systems that say what individuals ought to do in particular situations and in general (and these could be rule-based, aka "deontology", act-based, aka "act utilitarianism", virtue-based, aka "virtue ethics", etc.).
I'm unsure how regular people use "ethics" and "morals", so in any conversation I tend to clarify what I mean by the terms so as to avoid confusion.
My recommendation: utilitarianism is the best ethical framework for life - in every regard. It has a proven track record (advocating for the abolition of slavery, women's rights, gay rights, animal rights, etc. -- all decades or even centuries before these became mainstream).
I studied some philosophy as an undergraduate. My understanding is this:
Ethical theory tries to answer the question, 'what should we do?' (e.g. death penalty or not?)
Moral theory tries to answer the question, 'why should we do that?' (e.g. 'because god says so')
Metaethical theory tries to answer the question, 'what is the 'theory of knowledge' behind that moral theory?' (e.g. 'are there moral facts at all?')
In my opinion, technology does not affect core morality; all it does is amplify the positive and the negative.
Consider the invention of the knife, the bow & arrow, gunpowder, rockets. Each one of these pieces of technology amplified the ways humans can harm and kill each other, as well as enabling other technological use cases. Their creation did not change morality.
Social media is a tool, no different from a bow & arrow. It can be put to many uses. Some of them have negative effects. That does not make it moral or immoral.
Technology neither creates nor affects morality; it is how the technology is used by individual users that does that.
Chemical weapons are bad. Covid vaccines are good. These things have some of the more drastic consequences. Yes, there are good and bad things. That doesn't mean the two sides are always in balance (or it wouldn't make sense to develop any technology).
I recently had a discussion on AI and morality with a philosophy PhD candidate (who publishes on ethics and human rights, though not on AI). Specifically, we discussed whether it was OK to allow self-driving cars despite not having a solution to moral questions such as "given the choice, should you run over 2.3 grandmas aged 71.2 or 1.7 kids aged 11.3?", and whether it was realistic that socially established "correct solutions" to such problems could be incorporated into AI.
His opinion was that such "deep ethical problems" have been around for millennia and it's unreasonable to expect anyone to "just solve" them. Therefore, self-driving cars will not have solutions to these fundamental issues and, as a consequence, society should not and probably will not accept self-driving cars.
I agree that we will not "just solve" such questions (i.e., arrive at a consensus across humanity) any time soon. However, I also think such questions are almost irrelevant, because the "conundrums" ethical philosophy discusses don't happen in practice. There is no need to "solve" these problems in order to use self-driving cars. We can (and will) slowly progress towards a rough consensus on what we want (or, at least, can tolerate) the "moral choices" of self-driving cars to be in almost all situations that arise in practice. In fact, AI can be a great step forward in "practical morality", because an AI will actually do what it "considers" morally right.
Of course, there will be many difficult questions to answer. However, I think it's a fundamental error to just give up and take the position of my philosopher friend. Moral qualms have not stopped technology in the past, and I find it implausible that society will somehow "not accept" it. As a philosopher, or even just a member of society, you have to see AI as a chance and an obligation to advance morality. It's pretty clear that human morality has been changing (I believe advancing) over the millennia. AI marks a transition where the moral questions of the past begin to make a difference in the real world, because what we set as moral standards has a much larger effect on what people and things do.
To make progress on this, we have to accept that it is a fool's errand to try "deriving" correct morality from "first principles" (Kant famously derived from absolute and eternal first principles that it's morally OK to kill "illegitimate" new-borns as a means of birth control). Rather, it's an exercise in consensus building. Likewise, it is not reasonable to expect moral solutions to arrive at something "perfect and complete". Practically relevant morality will be fuzzy and ever-changing, just like judicial systems.
I am quite sad that so many philosophers and members of the public seem reluctant to accept this challenge of overhauling the millennia-old, stagnant academic debates. If they don't participate, engineers will "solve" these problems themselves, perhaps choosing ease of implementation over moral considerations.
> I am quite sad that so many philosophers and members of the public seem reluctant to accept this challenge of overhauling the millennia-old, stagnant academic debates.
Many philosophers and members of the public are sad that you insist on ignoring their input and are going to charge ahead, long-term consequences be damned. (I’m not taking sides here.)
> If they don't participate, engineers will "solve" these problems themselves, perhaps choosing ease of implementation over moral considerations.
It’s OK. Congresses, parliaments, and other policy-making bodies, basing their decisions on populist emotional feedback loops, will regulate these solutions in ways that leave both the moralist and the solver confused and unhappy.
> Many philosophers and public members are sad that you insist on ignoring their input
On the contrary, I strongly encourage them to give input, and I criticise those who would rather give up, dismiss the questions as impossible to solve, and lament how technology has been destroying society for the last 2000 years - all while self-driving cars start being used anyway, leading to suffering that could have been prevented by thinking things through more and seeing them from more points of view.
Which inputs by philosophers or the public are being ignored?
There are humans driving cars right now.
How do they solve the problem?
My guess is, they randomly choose who to run over in the heat of the moment.
Why isn't that a viable solution?
If AI drivers generally have fewer accidents, and in the few remaining cases behave like humans, wouldn't that be a win?
Because codifying a behaviour is explicitly justifying that behaviour, and few engineers want to be responsible for signing off on the feature to run over Grandma.
A workaround thus far has been to abstract the problem into small enough pieces that ARE palatable to sign off on, as your comment shows. "Minimize the number of Grandmas run over" is a different framing than "Should we run over Grandma?".
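To make that framing shift concrete, here's a toy sketch of how the explicit moral question tends to dissolve into a cost minimization that can be signed off on piece by piece. Every class, weight, and function name here is invented purely for illustration; it is not how any real autonomy stack is written:

```python
# Hypothetical illustration only: nobody signs off on "run over Grandma?";
# they sign off on a hazard taxonomy, a weight table, and a minimizer.
from dataclasses import dataclass


@dataclass
class Hazard:
    kind: str              # e.g. "pedestrian", "cyclist", "vehicle", "barrier"
    collision_prob: float  # estimated probability this trajectory hits it


# The entire "moral framing" lives in this table of made-up weights,
# reviewed as a tuning parameter rather than as an ethical decision.
HARM_WEIGHTS = {"pedestrian": 100.0, "cyclist": 80.0, "vehicle": 20.0, "barrier": 5.0}


def expected_harm(hazards):
    """Aggregate expected-harm score for one candidate trajectory."""
    return sum(HARM_WEIGHTS[h.kind] * h.collision_prob for h in hazards)


def pick_trajectory(candidates):
    """Return the name of the candidate trajectory with the lowest expected harm."""
    return min(candidates, key=lambda name: expected_harm(candidates[name]))


# Neither option is "right", but the minimizer still returns one of them.
options = {
    "swerve_left": [Hazard("pedestrian", 0.30)],
    "brake_straight": [Hazard("pedestrian", 0.10), Hazard("barrier", 0.90)],
}
print(pick_trajectory(options))  # -> "brake_straight" (10.0 + 4.5 < 30.0)
```

None of the individual sign-offs looks like an ethical choice, yet the system as a whole still encodes one.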
It won't be random; it will be based on what they feel is best. It's immaterial in this case, because people will blame the AI as a whole when someone dies (even if the death was unavoidable). That's not an option with human drivers, because blaming drivers as a whole would mean banning people from driving, and then nobody could drive a car at all. So the responsibility is shifted towards the smaller details, and folks can feel safe in the knowledge that they may drive as long as they do "nothing wrong" - even if that "wrong" is ill-defined and some situations have only "wrong" solutions.
Humans also directly suffer consequences for those actions. They show remorse and suffer emotionally attempting to grapple with the outcome of their choice. The legal system takes remorse and suffering into consideration, as it is designed to do.
Do self-driving engineers personally commit to being punished and to suffering remorse for their algorithm’s choices? And before you say "it’s not fair, the CEO is at fault!", think about who’s writing the code. The CEO doesn’t make the self-driving car possible; the engineer does.
> If AI drivers generally have fewer accidents, and in the few remaining cases behave like humans, wouldn't that be a win?
Yes. I think that's a big part of why it's not necessary to "completely, once and for all solve" ethical problems to automate things that might run into them. One could easily argue (and people of course have) that it's also immoral not to take measures that will reduce accidents, which I'm quite sure will happen with AI drivers in the not too distant future.
> Can technology affect human morality?
This is a bad question to start the article with, because the average Joe needs only two seconds to answer it in the affirmative. A society with plenty of material wealth facilitated by technology (electronics, financials, corporate law, patents, accepted practices, etc.) will consider it immoral to feed a dog anything other than dog food, while that would be a non-issue in a country where people themselves are starving because they are stuck with the wrong "societal tech" (e.g. Cuba or North Korea).
> One prominent example of how technology can impact morality is Facebook.
Ah, Facebook, obviously the most important piece of technology we have invented in the last twenty years. No, they are really not.
Facebook is prominent, is tech, and can impact morality - exactly what the author is claiming. You are saying that it's "...obviously the most important piece of technology...", but the author does not make that claim. In case you aren't familiar with straw-man arguments, your last sentence is a textbook example.
Facebook, Twitter, Instagram think that they are the arbiters of morality.
The article addresses information technology specifically. My first thought was: to what degree is this question relevant to other technologies, e.g. paper?