New Kavli Center at UC Berkeley to foster ethics, engagement in science

news.berkeley.edu

20 points by orange3xchicken 4 years ago · 10 comments

toyg 4 years ago

Stuart Russell is giving this year's Reith Lectures from the BBC (one of the reasons I enjoy paying tv tax in this country). The first one was on AI in warfare, and it was interesting even just to hear "the other side", i.e. the military perspective on where they see AI actually being applied today and tomorrow. https://www.bbc.co.uk/programmes/m001216k

orange3xchickenOP 4 years ago

Recommend Russell's Human Compatible. Three principles to guide AI development:

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.
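The third principle — inferring preferences from observed behavior — can be sketched with a toy Bayesian model. This is my own hypothetical illustration, not code from the book: assume the human chooses between options Boltzmann-rationally (probability of picking an option grows exponentially with its value, scaled by an unknown weight `w`), then update a posterior over `w` from observed choices.

```python
# Hypothetical sketch (not from Russell's book): inferring a preference
# weight w from observed human choices, illustrating principle 3.
# Model assumption: the human picks option A over B with probability
# exp(w*value_A) / (exp(w*value_A) + exp(w*value_B))  (Boltzmann-rational).

import math

def likelihood(w, chosen_value, other_value):
    """P(human chose the `chosen` option | preference weight w)."""
    a = math.exp(w * chosen_value)
    b = math.exp(w * other_value)
    return a / (a + b)

def posterior(observations, candidate_ws):
    """Posterior over candidate weights given (chosen, other) value
    pairs, starting from a uniform prior."""
    post = {}
    for w in candidate_ws:
        p = 1.0
        for chosen, other in observations:
            p *= likelihood(w, chosen, other)
        post[w] = p
    total = sum(post.values())
    return {w: p / total for w, p in post.items()}

# The human repeatedly picks the higher-value option (value 2 over 1),
# so the posterior shifts toward larger w (stronger revealed preference):
obs = [(2.0, 1.0)] * 5
print(posterior(obs, [0.0, 1.0, 2.0]))
```

The sibling comments' objection maps directly onto the model assumption: if the "rationality" link between behavior and preference is wrong (akrasia, engagement-optimized feeds), the inferred `w` is wrong too.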

  • chaosite 4 years ago

    > The ultimate source of information about human preferences is human behavior.

    Say what now? Humans like to smoke, eat too much sugar, and behave in various ways that are against their own self-professed preferences...

    • SuoDuanDao 4 years ago

      Yeah, this seems like a justification of the 'stupid consumer' advertising-driven model of human behaviour. Human clicks on ragebait, so it must like ragebait. Until it starts avoiding platforms that send it ragebait, then the human is clearly depressed. Because why else would it avoid its source of ragebait, which is clearly the only thing it cares about?

      I've found in general that as platforms like Youtube and Facebook got more optimised for immediate feedback that's supposedly all about my preferences, they became less pleasant overall user experiences. Is it too much to ask for an AI that at least tries to help humans move towards self-actualization? I'm not saying I'd expect it to work out of the box, but some evidence our long-term interests are actually aligned would be nice.

    • gadflyinyoureye 4 years ago

      That’s what people like. If you make the AI know that those things are bad, you’ll be acting as big brother. Do we want the government or mega corp setting preferences?

      For example, the American Heart Association promotes a high-carb diet to those with heart issues. Sadly, evidence-based medicine shows such a diet is bad. If the AI did as the AHA says and promoted a high-carb diet, we'd have more heart issues in the country.

totalZero 4 years ago

Technologists' clichéd lack of social skills is a major ethical hazard in the world of technological innovation. We have engineers designing systems to interact with humans, but those engineers don't understand the relevant interpersonal forces and dynamics. If you want more ethical systems, raise conscious and sociable children who become engineers.

throwaway81523 4 years ago

Ethics, a powerful negotiating tool. https://www.gocomics.com/doonesbury/1986/08/10
