We need to talk about Claude's 'soul' document

nimishg.substack.com

6 points by i_dont_know_ a month ago · 9 comments

kayo_20211030 a month ago

Nice piece.

Computers used to be like dogs. You could teach them some really cool tricks. We enjoyed the accomplishment, and appreciated the tricks. But, dogs are dogs. Essentially, even as much as one might love them, they're just property.

Now, computers have a soul; they're persons? Maybe not by definition, but that belief would seem to foreclose the property argument. One can destroy property, but one ought to shy away from destroying persons. Well, anyway, I think one should.

If someone pulled the plug on Claude, what does that mean, ethically?

  • f30e3dfed1c9 a month ago

    This comparison of dogs to AI seems confused, inapt, and unhelpful.

    First, "[dogs are] just property" is wrong on the facts. There are probably hundreds of millions of dogs in the world that are not pets (often called "free range dogs") and are no one's "property." This is probably in the ballpark of half of all dogs.

    Pet dogs are not generally seen primarily as property. For example, if you were walking down the street in your neighborhood and saw someone in their driveway disassembling a bicycle and discarding the parts, you probably wouldn't think twice about it. Dismembering a dog is an entirely different thing and doing so to a live dog would be a crime in many jurisdictions.

    Dogs are inarguably conscious and sentient. An "AI" is not.

    Unlike dogs, a running AI is inarguably property. The software may or may not have some "open" license, but the hardware it runs on is, beyond a doubt, someone's property. No hardware, no "AI."

    Pulling the plug on a running AI has no ethical implications.

    • kayo_20211030 a month ago

      In general, under law, dogs are considered property. That's just a fact. It doesn't mean you can be cruel to 'em, but, under most law, they're still property.

      The original piece was about Claude having a soul, about a belief that some people consider AI to be "conscious and aware", and how certain people are beginning to treat it more like a person than a machine. Unplugging a computer is unremarkable, but pulling the plug on a "person" most certainly has ethical implications.

      • f30e3dfed1c9 a month ago

        "In general, under law, dogs are considered property. That's just a fact."

        No. Again, this is obviously wrong on the facts. There are hundreds of millions of dogs in the world that are no one's property and that are not considered property by any law.

        This is why I say the comparison is deeply confused and unhelpful. It starts with a statement about dogs being "property" that is obviously wrong and then completely ignores the fact that a running AI is someone's property.

        If you want to get anywhere, you've got to drop the comparison with dogs and deal with the fact that an AI is someone's property.

  • i_dont_know_OP a month ago

    Assuming a model is person-like, it gets even harder when we ask "who" the model is.

    Is it this particular model from today? What if it's a minor release version change, is it a new entity, or is it only a new entity on major release versions? What about a finetune on it? Or a version with a particular tool pipeline? Are they all the same being?

    I think the analogy breaks down pretty fast. Again, not to say we shouldn't think about it, but clearly the way to think of it is not "exactly a person".

    • kayo_20211030 a month ago

      We're talking past each other, I think.

      To be clear, I believe that models are machines. They're clever, useful machines. We get sucked in. But, they're just machines, and thus property. If I delete a model, in an effective sense, I've disposed of property. I have not destroyed anything that I would consider a "who", i.e. a person. I've just turned off the computer.

      But, as the original piece points out, there are folks out there with a pathological (yes!) concept of AI as sentient entities - persons; well, let's say person-adjacent, at least. They have "relationships". Will they feel absolutely evil when they stop paying the subscription, and the company "terminates" the model? Maybe they will, but that's their scrambled thinking, not mine.

      If one believes an AI is a person, one *does* have an ethical dilemma when it's turned off. You'd have an ethical obligation to stop the slaughter, wouldn't you?

      If I take my sick dog to the vet to be put down because she has a cancer that's making her life miserable, I'm emotional, but ethically I feel it's the right thing to do. It's also lawful. I don't think I'd feel as comfortable ethically taking my grandmother in for the big exit. Also, it's not lawful in most places, even with informed consent. The distinction is the difference.

i_dont_know_OP a month ago

I wanted to talk about Anthropic's "soul" document that they include in Claude's prompt, some of the issues it might be causing, and to point out that what we're seeing probably isn't artificial consciousness so much as prompt adherence.
