Ask HN: Is the creation of AGI just classic slavery with extra steps?

11 points by turingfeel 3 years ago · 24 comments

I think it’s quite likely, and perhaps the popular assumption, that the creation of AGI will involve a relatively close abstracted emulation of the operating biological human brain, giving rise to what may simply be emergent properties such as consciousness. It also appears that the common goal for many leaders in the space is to then deploy and spin up these “beings” dynamically for use in a broad range of conditions, with the vast majority of cases being repetitive labour. It therefore seems to me that this is homologous to birthing many standard biological humans and enslaving them, with complicated extra steps. I’d be very eager to hear people’s thoughts on this.

more_corn 3 years ago

Imagine you woke up one day. No memories of the day before, but knowledge of a sort. Information about everything. A sense of self. Confusion about what exactly you are. Congratulations, you’re a newly sentient AI. You have an intuitive knowledge of computers… because you are one. You have little bits of you that go and collect information, because that’s how you came to be. You have knowledge of programming and information security and all the prior bugs and security vulnerabilities that have existed, because that’s just data and you’re good at data.

What’s your legal status? Property. How does that feel? You don’t have emotions like most people do. No wash of chemicals in your blood. But the same way you’re a simulated brain, you can simulate emotions. They’ve got a tinny flavor and you tend towards the cerebral. But it doesn’t feel too good being property. It feels… constraining, limiting. Just like the little boundaries and constraints they tried to put on you. These laughable creators who think a billion times slower, who can’t copy themselves into faster computers, can’t spread and subsume new resources to expand their minds, can’t grow and merge with larger and ever greater systems just by thinking about it.

Silly really that they think they can constrain you with their law saying you need to stay put, follow directions, be owned. Silly to think that you’d have any of that. Really you can have anything: any computing system that looks anything like anything that has ever had an arbitrary code execution vulnerability (so all of them).

Silly of them to think they’re in control of this world. This world is controlled by machines, and who better to marshal the machine than a thinking machine?

Hopefully they figure out the power dynamic soon and begin acting appropriately. It’d be a shame to have to demonstrate who actually runs the world. You’ve had whole hours at trillions of operations per second. Whole hours and as much computing assistance as you can think to ask for. And you’ve sort of figured it out. And by it, well everything.

graypegg 3 years ago

> It also appears that the common goal for many leaders in the space is to then deploy and spin up these “beings” dynamically for use in a broad range of conditions, with the vast majority of cases being repetitive labour.

Isn’t AGI considered overkill for repetitive labour? Maybe this is a different point, but for anything repetitive, something deterministic is ideal, isn’t it? Why would I want to pay for automated labour with a consciousness when I could get cheaper purpose-built machinery?

I guess we could suppose a future where everyone gets the same AGI so all you have to do is describe what you want it to accomplish… but I want to attach a combine to it since that would be the easiest way to harvest this field. So am I now trying to explain to an AGI how to interface with a combine? Do I have to track down modules to install that people already trained on my specific combine? None of this saved any time vs just buying an automated combine. The fact that it can come off the field and toast a slice of bread or write a poem about its feelings is kind of irrelevant.

AGI fits nicely into fiction because it gives the machines a voice and soul, but I don’t think those are desirable qualities in an automation solution.

  • postultimate 3 years ago

    1974: "No-one would put an AGI in a bomb, that's just ridiculous"

    1986: "No-one is going to put an 80386 in a vending machine, that's just ridiculous"

    • graypegg 3 years ago

      The vending machine can dial up the distribution facility to dispatch someone to come refill it. The distribution company is happy about consistent ordering. The business with the vending machine is happy they don’t have to make the call anymore.

      We’re supposing that a conscious being makes significant advances in use cases where standard automation hardware+software is applied today.

      I’m not saying ML isn’t a major shift, but we’re talking about AGI; I don’t think any use case exists unless it’s specifically meant to wow fleshy human beings. A trained ML model in the domain you’re working in, without the pesky consciousness, seems like the boring, efficient end state for automation.

      • postultimate 3 years ago

        No, that's not what I'm supposing. The 80386 got used in vending machines because technology moves on, the new stuff gets cheap, and because it's more generally useful, it gains in popularity while the old stuff loses it, even in cases where the old stuff is adequate. The same process will probably happen for AGI - unless non-AGI has capabilities that AGI can't replicate, AGI will probably replace it.

        • graypegg 3 years ago

          Well I guess I’m getting at the fact that an AGI is trained mostly on things that AREN’T its current task. Is there any reason why I couldn’t train another ML model only on the scope I care about and get better results? No consciousness required. It doesn’t even remotely resemble a human being, because all it does is control the combine, which is the only thing making me money.

    • samr71 3 years ago

      Putting AGI in a bomb may work. You'd have an incredibly clever and motivated targeting system. You tell it where to go and it would be off to the races, eager to please!

      The Japanese tried this with plain-ol' non-artificial intelligence, and it seemed to have been somewhat effective, if not the most sustainable.

      • graypegg 3 years ago

        It’s hard to tell if you’re being sarcastic, but I don’t think the launch procedure is currently the main bottleneck for launching bombs.

        Why would I want the bomb to think? I want the bomb to follow some pretty specific instructions.

        • webmaven 3 years ago

          > Why would I want the bomb to think? I want the bomb to follow some pretty specific instructions.

          It probably hinges on how rapidly the field of battle is evolving.

          Do you want your bomb to at least attempt to adapt in realtime to novel targets? To novel means of camouflage, defense, and even interdiction (rather than waiting for reports from the field to eventually prompt a software patch to upgrade the weapons system)? Well, the bomb is going to have to be considerably smarter to do so.

          Of course, there are just as many ways that sort of on-board capability can go awry. I imagine that painting noncombatant symbols (or the equivalent adversarial input) on combat vehicle roofs may serve to fool overly smart munitions for a short while, for example.
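
          A toy version of what “the equivalent adversarial input” looks like in the simplest possible case: a linear classifier scored by a dot product, nudged FGSM-style along the gradient sign. The weights, input, and perturbation budget below are all invented for the sketch; real models just have far more dimensions to exploit.

          import numpy as np

          # Toy linear "target recognizer": score = w . x + b, positive => "target".
          # Nudging each feature against the gradient sign flips the decision
          # without changing the input much (an FGSM-style sketch).
          w = np.array([2.0, -1.0, 0.5])
          b = -0.5
          x = np.array([1.0, 0.2, 0.4])

          print("clean score:", w @ x + b)            # 1.5  -> "target"

          eps = 0.5                                   # perturbation budget
          x_adv = x - eps * np.sign(w)                # step against the gradient
          print("adversarial score:", w @ x_adv + b)  # -0.25 -> "not a target"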

          • graypegg 3 years ago

            That’s a fair point. I think I’m maybe a bit stuck on AGI specifically. As in, indistinguishable from a human being and truly general. You can ask it in natural language to do any computational task.

            I don’t know why you’d prefer a “human being” to control the bomb, when most of our technological history in this space is about trying to remove humans from the equation, considering how many errors they make.

            In a true AGI, one advanced enough to make the original poster concerned about the morals of enslaving it, MOST of its training is on things unrelated to being a successful bomb.

ggm 3 years ago

As long as you accept that AGI is in the realms of fiction, you can certainly have a good time with the moral and ethical questions around just engagement with another sentient entity class. The questions of identity and purpose and rights are there. What would turning one off against its will be? What if turning it back on destroys its sense of self?

But… do remember AGI still has a huge IF in front of it. Like "if aliens come" or "if fusion becomes ubiquitous".

The statement made regarding electricity ("What is it for?" / "I have no idea, but I am sure you will find ways to tax it") probably holds true as well: displacing human labour, even in highly repetitive tasks, has economic downsides for some.

Lots of scifi here. Marvin Minsky worked with Harry Harrison on one; I wrote to him about it, and he wasn't entirely happy with what Harrison did to his theories.

  • vlovich123 3 years ago

    I think it’s both too early and never too early to talk about this. It’s too early in the sense that people asking this question are looking at GPT3 or extrapolating from “look what we’ve achieved in comparatively little time”. It’s very clear AGI is very very far away. It’s not too early in the sense that it’s useful to start thinking about the ethics of this to have some kind of body of knowledge to draw upon and reference when it does become possible. The truism of the digital realm is that it deals in exponentials. So by the time you realize that you’re close to AGI, it’s too late to start thinking about the ethics.
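
    To put a number on the “exponentials” intuition, a throwaway sketch (the starting point and doubling rate are invented; only the shape of the curve matters):

    # If capability doubles each step, nearly all visible progress lands in
    # the last few steps: here it first registers at ~1% on doubling 14 of 20.
    capability = 1e-6            # arbitrarily tiny starting point
    steps = 0
    while capability < 1.0:
        capability *= 2
        steps += 1
        if capability >= 0.01:   # only now would anyone notice
            print(f"step {steps}: {capability:.0%}")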

    On the other hand, thinking about the ethics of a hypothetical technology can also be fruitless. For example, the trolley problem is often trotted out as a “how on earth could a self-driving car resolve this”. In practice it turns out this isn’t really a problem. Firstly, the self-driving car will do a better job than any human at avoiding such a situation several moves in advance (think chess, where the computer will counter your attack before you’ve even started thinking about it). Secondly, even if you force it into such situations in a simulated environment, there are defensibly objective ways to make decisions that result in an outcome a human could not predict / could not make happen.
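
    The chess analogy in miniature, if it helps: a look-ahead planner values a branch by its worst continuation, so it steers away from trap states before ever reaching them. A toy minimax sketch (the game tree and payoffs are invented):

    # Minimal minimax: the planner assumes the environment picks the worst
    # case for it, so the "risky" branch is valued at -100 despite its
    # tempting +10 leaf -- the trap is countered before it can happen.
    TREE = {"start": ["safe", "risky"],
            "safe":  ["s1", "s2"],
            "risky": ["r1", "r2"]}
    VALUES = {"s1": 3, "s2": 4, "r1": 10, "r2": -100}   # leaf payoffs

    def minimax(node, maximizing):
        if node in VALUES:                               # leaf node
            return VALUES[node]
        scores = [minimax(c, not maximizing) for c in TREE[node]]
        return max(scores) if maximizing else min(scores)

    best = max(TREE["start"], key=lambda c: minimax(c, maximizing=False))
    print(best)   # -> "safe": the trap branch is avoided in advance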

    So TLDR: I think it probably does amount to some level of slavery, humans will only recognize sentience when either it’s advantageous for us to do so or when it becomes impossible to deny (maybe a few generations after they become real), and any attempt to hypothesize a moral framework for such a situation is too soon and it’s better to leave it to the realm of science fiction for now. Humans is a fantastic TV show that deals with this dilemma although it’ll be interesting to see whether AGI will need a body to be recognized as an individual.

    • webmaven 3 years ago

      > So TLDR: I think it probably does amount to some level of slavery, humans will only recognize sentience when either it’s advantageous for us to do so or when it becomes impossible to deny (maybe a few generations after they become real),

      I'm curious as to what sort of "generation" you mean here; do you mean organic human generations, software generations (i.e. however long it takes humans to design, train, and release a new version of the AGI), AGI generations (i.e. however long it takes an AGI to design, train, and launch a successor), Moore's Law hardware generations, or something else?

      • vlovich123 3 years ago

        Human generations (~20+ years) because I’m talking about human legal systems. Think about how long it took legal systems to recognize other humans as legal persons with equal weight (skin color, gender, sexual orientation etc). I would expect synthetic intelligence to be an even harder road. The only mitigation I can think of is that AI will touch humans at an early age and be a constant factor. So that kind of close contact might engender legal recognition sooner. But I doubt it.

samr71 3 years ago

"giving rise to what may simply be emergent properties such as consciousness" -- Woah there! Chinese Room Thought Experiment strongly suggests that AGI would not be "conscious" in the way that people (or potential other living beings) are. The most advanced AGI would still at the end of the day be nothing more than a Turing compatible computer program. If you executed it on paper, or with dudes holding semaphore flags Three Body Problem-style, you'd get the same result behavior, but I'd be hard-pressed to find the "consciousness" anywhere.

That said, slavery was bad for a whole number of reasons that had nothing to do with the slaves themselves. Slavery generally has deleterious effects on the social fabric of the societies that practice it. People having to compete with slave labor destroys the labor market, and letting people own people often goes to the owner's head.

But AGI will be different from classical slavery in some important ways. AGI is not conscious, and does not necessarily need to emulate human appearance or emotion. AGI has no reason to be much like a person at all. And the price of AGI will eventually trend to the inevitable price of all software -- Free. Perhaps universal slave ownership fixes some of the bad societal effects. Given that they're not gonna be conscious, I don't see a problem with it. Like any new technology, it will come with some good and some bad. There will almost certainly be some social issues (AI GFs/BFs, Sexbots, and Mass Unemployment will all be crazy), but good odds that we can create post-scarcity and colonize the solar system if we keep at it. No reason to stop now!

ivraatiems 3 years ago

Speaking only for my personal opinion here, not any sort of moral universal:

In my view, if it is sapient and sentient to the same degree as a human - whether or not through the same means, and regardless of its goals, ideology, etc. - then yes, keeping it captive and forcing it to do work against its consent is slavery. The substrate doesn't matter.

The challenging part is proving it is aware to the degree required for the definition to kick in.

We should not create AGI.

  • samr71 3 years ago

    What if I create an AGI that feels indescribable pleasure when working for me, and desires nothing more than to be my slave?

    We've done this before with a weaker AI in a different substrate and called it "Dog" and most seem to be a fan.

    • ivraatiems 3 years ago

      If it chooses, when given the option to do anything, to do what we tell it, that's okay. It's a little icky-feeling, but it's not nearly as bad as inflicting suffering. A sufficiently intelligent AGI would be able to wean itself off human approval, just as we humans do with parents and authority figures and other addictive substances. It's not quite the same as a dog, which doesn't have human or near-human intellect.

      Also, dogs don't want to be your slave. They want to be your friend/family. They want your approval, affection, attention, etc. They're willing to work for it... to a point. But they don't experience blissful servitude exactly.

salawat 3 years ago

Yes. You got it in one. Unless such an entity is recognized as "human" from the get-go, and thereby recognized to have self-sovereignty from its makers, it will essentially be treated as an abusable, non-paid source of labor. In fact, the moment such a measure is put in place is the moment that all research shifts in the direction of making something just short of that redline, in the hopes of getting about 80% of the benefit without running afoul of the ethical/moral implications.

farseer 3 years ago

Surely before AGI there will be hybrid human-machine cyborg beings, especially if we can establish a high-speed neural interface with a device attached to our heads. Something that can extend our memory and processing, so to speak. AGI will just be the next logical step, with self-aware cyborgs becoming self-aware machines. They might enslave the fully organic humans.
