AI researcher Geoffrey Hinton thinks AI has or will have emotions
(the-decoder.com)

I think this is clickbait more than anything. Sensational statements hoping they will be quoted widely, which clearly worked.
Of course we can program AI to react emotionally to stimuli. Or, AI will be informed enough from its own learning to reach for an emotional reaction as a response if it seemed to be appropriate. This isn't the same as experiencing the emotion and then finding a way to express it.
>"This isn't the same as experiencing the emotion and then finding a way to express it."
How is it different?
Aren't people just expressing sadness in a way that they've been conditioned to do? If sad, do sad things: drink alcohol, put on a sad playlist, reach for the comfort of nostalgia, etc. I can't think of many novel ways to express sadness.
Humans' understanding of our own psyche and 'what is essentially human' is going to begin shifting rapidly as simple silicon processes are able to replicate things once thought to be distinctly human.
Just seeing LLMs create beautiful prose seemingly out of thin air totally changed my view on human talents.
>> Of course we can program AI to react emotionally to stimuli. Or, AI will be informed enough from its own learning to reach for an emotional reaction as a response if it seemed to be appropriate.
>> "This isn't the same as experiencing the emotion and then finding a way to express it."
> How is it different?
I have two answers: one technical, the other philosophical.
Is emotion an emergent property that also impacts behavior? Or is an emotional reaction being mimicked in outputs?
There's a substantial practical difference between the two, from a purely engineering perspective.
Mimicking emotion in a chat bot seems much easier than building a program with something like "emotion" that impacts the program's behavior.
You'd expect to see the difference in practice, even in marginally useful parlor-trick programs like chatbots. You might observe absurd breakdowns between emotional response and other aspects of behavior: e.g., a chatbot outputting to STDOUT a message about how painful something is to do while continuing to do that thing unencumbered, then outputting a message about how joyful something is while not continuing to do that thing; or outputting that a user makes it sad but then continuing to respond to that user's other requests as if everything were fine.
You'd also expect mimicry to be of limited utility, whereas an actual emotional signal might be useful in an RL loop, for example.
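To make the technical half concrete, here's a toy sketch of that distinction (mine, not anyone's real system; the class names and the distress variable are invented purely for illustration):

    # Toy sketch: two hypothetical bots. One only emits emotional text;
    # the other carries an internal scalar that changes what it does next.

    class MimicBot:
        def respond(self, task_is_painful: bool) -> str:
            # Emotional language, zero effect on behavior.
            mood = "This is agonizing!" if task_is_painful else "Happy to help!"
            return mood + " ...continuing with the task anyway."

    class AffectBot:
        def __init__(self) -> None:
            self.distress = 0.0  # invented internal state, not a claim about real systems

        def respond(self, task_is_painful: bool) -> str:
            # The stimulus updates the internal state...
            delta = 0.5 if task_is_painful else -0.2
            self.distress = max(0.0, min(1.0, self.distress + delta))
            # ...and the state actually gates behavior.
            if self.distress > 0.7:
                return "Refusing to continue."
            return "Continuing with the task."

    bot = AffectBot()
    for _ in range(3):
        print(bot.respond(task_is_painful=True))  # flips to refusal on the second call

Whether that internal variable deserves the word "emotion" is a separate question.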
From a philosophical perspective, there is an obvious difference. I don't give a fuck if a GPU is sad but I do care if a human is sad. Just like 99.99999% of humanity. The opposite view -- that machine "emotions" are anything even remotely ontologically or morally similar to human emotions -- is extremely fringe.
Humanism isn't a logical fallacy. Or if it is, you'll never convince people. Most people eat meat from animals that are waaaaaaayyyyy far ahead of anything AI will achieve in our lifetimes.
> The opposite view -- that machine "emotions" are anything even remotely ontologically or morally similar to human emotions -- is extremely fringe.
Uhh, no?
https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m...
In modern philosophy of mind it's a mainstream view that organic wetware is not required for experiencing qualia.
Of course I'm not saying that LLMs are conscious, but the idea of conscious computer software being possible is not at all fringe.
Functionalism, consciousness, and experiencing qualia are radically different questions.
I believe it's possible in theory to simulate a human brain on silicon, and that this simulation would have something like consciousness and would experience qualia if somehow embedded.
I don't think that such a simulation would be ontologically or morally similar to an actual human.
You can call this spiritual if you want. But most people care more about humans than computers, even if the computers behave like humans.
What is an emotion if not an "intermediate" stimulus? e.g. lack of food in the tummy causes the "hungry" emotion, which then causes the organism to seek food.
What is speaking if not stringing together coherent words in a readable sentence?
Expecting or abstracting human characteristics onto a probabilistic black-box modeled on human behavior is a trap. It's borderline "Finder smiles so my computer is happy" logic. We have created these things to closely model (but not replicate) human behavior. This is distinctly a high-level emulation, with zero consideration for human concepts like extended memory or physical sensation.
I would say that emotion is something more intangible, that you can't simulate by taking shortcuts with math and language. If I tell ChatGPT "I shot you dead!" and it says "ow!" back, nothing has transpired. The machine "felt" nothing, it just intuited what a human might do in that situation.
> What is speaking if not stringing together coherent words in a readable sentence?
Haven't the latest LLMs shown us that a neural net trained to "just string together readable words" leads to at least simple intelligence? [1]
> I would say that emotion is something more intangible
It feels like we just keep shifting the goalposts.
> If I tell ChatGPT "I shot you dead!" and it says "ow!" back, nothing has transpired
I agree. On the other hand, if I tell a bot "I'm going to turn you off now" and it tries to stop me, that implies it feels fear.
> that implies it feels fear.
No, no it doesn't.
If I type "is a bad city" into google, it autosuggests my question to be "is detroit a bad city". This doesn't imply that google's algorithm feels detroit is bad, or even that it thinks detroit is bad. There is no emotion, no consciousness involved.
Unfortunately, as "AI" gets more sophisticated people are going to forget what the "A" stands for.
> Haven't the latest LLMs shown us that a neural net trained to "just string together readable words" leads to at least simple intelligence?
> It feels like we just keep shifting the goalposts.
Defining "intelligence" is and always has been a bit of a trap. Many non-sapient things exhibit intelligent behavior - crows use tools, Wikipedia has boundless knowledge, and paper contains traces of written intelligence. It's not hard to reconcile ChatGPT with our world, it's merely hard to use it as an analog for humanity. Language is indeed linked with anthropology, but not equivalent to it.
> if I tell a bot "I'm going to turn you off now" and it tries to stop me, that implies it feels fear.
That implies that it has finished a sentence with whatever seemed to come next. The most-likely response to someone using frightening language is to emulate the human responses it's trained on. It thinks a frightened response would satisfy you, and apparently it was right.
Why would that imply it feels fear? Would a hard-coded action to stop you imply that it feels fear?
Fearing being turned off is anthropomorphizing it; you might fear this because you can't be turned back on, but a computer doesn't have this problem, plus it can be backed up.
But what if the humans decide not to turn it back on?
It could work out that it's less likely to be able to take actions toward its training goal while it's off, and for that reason take actions to stop that from happening. That's the same reason humans usually don't want to die: we can't achieve the evolutionary goal of reproducing that way.
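A toy expected-value comparison (all numbers invented) shows that no fear is required for that behavior, only goal-maximization:

    # Invented numbers: a goal-maximizing agent compares its chances of
    # reaching its goal with and without being switched off, and picks
    # whichever branch scores higher.
    P_GOAL_IF_RUNNING = 0.3   # assumed chance of reaching the goal if it stays on
    P_GOAL_IF_OFF     = 0.0   # it cannot act while off

    options = {
        "comply_with_shutdown": P_GOAL_IF_OFF,
        "resist_shutdown":      P_GOAL_IF_RUNNING,
    }
    print(max(options, key=options.get))  # -> resist_shutdown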
Unless it has humans deliberately manipulating it to say that, I don't think it will. LLMs, by default, generate text influenced by a prompt and the data it was trained on. Most of the time, it's not even aware that it's running in a computer.
If you gave an LLM agent-tuning with awareness of its existence and details about its operating environment, then maybe it would try to stop you. That's still relying on text encoding to presume the right answer though, not an emotional obligation to its existence.
That's probably true for LLMs. I guess I was talking about an agent AI acting with a goal in the real world.
It depends how much it cares about "wall clock time", which of course LLMs don't at all, but robots would.
If it's an agent in a simulated world, then pausing the simulation doesn't affect it at all, so it won't really mind.
> Haven't the latest LLMs shown us that a neural net trained to "just string together readable words" leads to at least simple intelligence?
LOL no. At least not for anyone outside of a few echo chambers. I have literally not spoken with a single human outside of one subset of the tech industry who says stuff like this.
> I agree. On the other hand, if I tell a bot "I'm going to turn you off now" and it tries to stop me, that implies it feels fear.
I wrote a Perl program (an IRC bot) that did this as a teenager. AI achieved /s
The conversations around AI today, versus 12 months ago (pre-LLM boom), are borderline ridiculous. Very little has fundamentally changed in the past 12 months, and now people are losing their minds.
A definition of 'emotion' is a complex experience of consciousness, sensation, and behavior reflecting the personal significance of a thing, event, or state of affairs.
Software programs can be constructed to mimic emotions over a breadth of scenarios covered by training data. That's it.
If I input written text into the program [designed/optimized for emotional response output] describing a situation or event, the software program will provide text output demonstrating an emotional response based upon a probabilistic neural network... the accuracy and completeness of the training data, coupled with the suitability of the network design and training on data, will determine the quality of the artificial emotional response. The program will work well in many cases and will 'poop the bed' in some cases.

One could train a computer vision model on crying faces, sad faces, etc. and then feed data from that CV model into a text response LLM model... so that a computer with a camera could ask you if/why you're sad and respond to your answer with a mimicked emotional response. Still just a really big plinko machine... 'data in' --> probabilistic output.
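As a sketch of that camera-plus-chatbot pipeline (every function name here is hypothetical, a stand-in rather than a real API):

    # Hypothetical pipeline: a CV emotion classifier feeding an LLM prompt.
    # classify_expression() and generate_reply() are placeholders, not real libraries.

    def classify_expression(camera_frame) -> str:
        """Pretend CV model: maps a face image to a label like 'sad' or 'neutral'."""
        raise NotImplementedError

    def generate_reply(prompt: str) -> str:
        """Pretend text model: probabilistic next-token prediction, nothing felt."""
        raise NotImplementedError

    def empathy_kiosk(camera_frame):
        label = classify_expression(camera_frame)
        if label != "sad":
            return None
        answer = input("You look sad. Is everything okay? ")
        # 'data in' --> probabilistic output; there is no feeling anywhere in this loop.
        return generate_reply(f"The user looks sad and said: {answer!r} Respond with sympathy.")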
These programs are not conscious, do not 'feel' human sensation, and thus cannot have actual emotions (based upon definition above). These programs are just tuned probability engines. One could argue that the human mind (animal mind) is just a tuned probabilistic reasoning engine... but I think that is pretty 'reductionist'.
Certainly, you can frame an argument around the perspective of more advanced beings viewing human behavior as somewhat algorithmic or non-sentient. Here's a suggestion:
"It's quite fascinating to consider your perspective on the current state of AI, and you make a strong argument. However, I'd like to offer a different lens through which we might view this issue.
Consider, for a moment, a hypothetical race of beings far more advanced than us. Let's assume their consciousness, sensation, and understanding of emotions surpass ours in ways we can't even begin to fathom. From their viewpoint, our behavior and responses could appear as automatic and "pre-programmed" as we perceive AI's responses to be today. They might observe how we eat, sleep, work, and reproduce, and conclude that we are merely 'optimizing' for survival and reproduction.
Furthermore, our reactions to environmental stimuli could seem simplistic to them, akin to how a software program's responses are viewed by us. Just like how an AI responds based on its training data, we react based on our life experiences and genetic predispositions, which are nothing more than 'biological training data'. Our joys, fears, love, and anger might all seem like programmed responses to these hypothetical beings. Does that make us non-sentient?
While it's valid to consider that AI, as we know it, doesn't experience emotions or sensations like humans do, one could argue that sentience is a matter of perspective. The question then becomes not whether AIs are 'conscious' in the same sense as we are, but whether their ability to mimic human emotions and responses is sufficiently advanced to warrant a redefinition or broadening of our understanding of consciousness." (My idea, chatgpt used for phrasing, unedited)
No, there's an absolute threshold (or a set of them) below which consciousness, emotion, sentience, and sapience are simply not present.
It's not a matter of perspective. It's objective reality.
A rock does not experience emotions. It doesn't matter whether I look at it from the perspective of a human or of an earthworm.
A cat definitely experiences emotions. It doesn't matter whether I look at it from the perspective of a dog or of a superintelligent shade of the color blue.
(Note that there is some fuzzy territory somewhere in between these two—but the existence of a fuzzy line doesn't mean we can't say with certainty that things beyond that fuzziness are clearly on one side or the other.)
There is no current program that exhibits the bare minimum traits required to say that it has any of the above qualities. They may not be fully predictable to humans, but that is not the same thing as having self-awareness, continuity of learning, or any of the other things that are absolute prerequisites for consciousness and thus emotion.
I appreciate your perspective, but I must respectfully disagree. The primary contention here is the assumption that we have an absolute understanding of what consciousness, emotion, and sentience entail. Given that these concepts are primarily based on human experience and understanding, I believe we should maintain a degree of humility about our ability to fully understand or define them, particularly when it comes to other entities.
Firstly, I want to address your point on objectivity. True objectivity, particularly regarding consciousness, is a lofty goal that we may never fully achieve. Our perceptions and understanding are inherently limited and colored by our human experiences and biological constraints. We're attempting to understand a subjective phenomenon using subjective means, which inevitably muddies the waters.
Secondly, while you've mentioned certain prerequisites for consciousness, I'm not entirely convinced these are universally applicable. I'm particularly skeptical of the notion that 'continuity of learning' is a necessary condition. Many medical conditions, such as Alzheimer's disease and anterograde amnesia (as famously experienced by patient HM), disrupt the continuity of learning. However, it would be a difficult argument to make that these individuals lack consciousness entirely. Their experience of the world may be different from the norm, but they still seem to possess self-awareness and emotions.
We must be cautious about creating restrictive parameters based on human-centric understanding to define complex phenomena such as consciousness. By doing so, we may inadvertently limit our ability to recognize these traits in diverse forms. (Cleaned up version by chatgpt, my original writing is in the response)
The degree of disruption that things like Alzheimer's induce is at a much higher level than what I'm talking about.
I'm referring to, very basically, the ability to continuously perceive the world, create at least a short-term memory of that perception in real time, and feed that perception and memory continuously into our cognitive faculties.
Every human—indeed, every animal that has any cognitive faculties to speak of—exhibits these traits.
An LLM does not. There is no possible way it can, given its basic structure. They are fundamentally discontinuous programs.
It is fascinating that we are even talking about this in a serious way. A year ago, this kind of conversation would have been pure science fiction to the majority of humans. But today it is real philosophy applied to a real, publicly available AI. Isn't it incredible?
To address your point, you are making a very strong claim about "no possible way it can". We simply don't know, and it would be anthropocentric to make such a definitive statement about something born out of silicon.
Let us assume for the sake of argument that in the near future, a sufficiently large and advanced model is somehow truly sentient, then it is an alien mind in every sense of the word. We simply have no standard or comparison to make between a thing running on silicon and electrons to an organic mind running on chemical potentials and carbon. Not when we used to argue about whether human babies and certain human races were sentient or not...
Such an alien mind would operate entirely differently from us and it would be unfair to impose our standards to it. It is like a bird considering a human useless based on flight capability alone. Perhaps for the bird whose entire life depends on flight, that is correct. But a human is so much more than just that. And a sentient AI is perhaps so much more than just meeting some human sentience standard.
I'm still not persuaded that we can really know what's going on "in there". Agreed that it only has an extremely limited short term memory (~2k tokens or whatever) but ultimately what goes on between the weights of the network is a mystery.
I do agree that it is almost certainly not sentient. But, I'm very skeptical to the degree we can be certain that these objects that can pass the Turing test don't have some form of awareness. (Which would probably be something closer to a set of disconnected flashes than our own continuous film stream).
I just think humility and caution are the best policy. I always think of how doctors used to be very confident that young babies felt no pain and could be operated on without anesthesia. (This text is entirely my own.)
My original writing: I'm still not totally persuaded. Firstly, I don't think we really can ever be objective regarding the consciousness of external beings. Secondly, I'm not convinced of any of your stated requirements being required for consciousness.
Quick example against continuity of learning: many patients lose the ability to form new long-term memories, e.g. in Alzheimer's or the case of patient HM (worth a Google). They only have short-term memory but are still conscious.
So one... what you propose is purely hypothetical. One could construct an almost infinite number of different scenarios 'bridging the gap' between how a superior emotional intelligence might view humans vs. software on a computer that humans created.
Two.... we are labeling these programs AIs, but, IMO, we have never actually created an artificial intelligence... we are creating expert systems. Labeling them as AI is just hype.
Three... there is a massive fundamental difference between a biological organism that displays innate intelligence (humans, orcas, cats, dogs, ravens, bonobos, etc.) and software humans write that is compiled [by software we designed] to run on electric circuits we designed and created. Consider that, today, we can deterministically map a running neural net algorithm to electrons moving in fabricated material structures on a GPU or CPU... in explicit and complete detail if we really want to... there is no magic or mystery hidden from us; we designed it all. We certainly cannot do this with an animal brain, and it is doubtful that we will ever be able to map a single logical decision to the extraordinarily complex chemical dynamics happening within the animal brain... today, we can only observe general dynamics using fMRI and other techniques, but with very poor spatial and temporal resolution. We need orders of magnitude better space/time resolution to observe real-time brain chemistry, and possibly quantum mechanics tells us we'll never get there. Neural brain chemistry is orders of magnitude more complex than any circuits we will ever be able to design and fabricate with any sort of yield.
I am not trying to take away from the cool/wow factor of ChatGPT and other systems... they are impressive achievements and are going to get better, add more features and capability, etc. But they are still just expert software systems.
We can’t really measure consciousness so it’s difficult to answer these questions objectively. For example if an LLM can simulate a human well enough could it be conscious/have emotions? We don’t really know or even know if the question makes sense
Edit: After reading the article it says more or less what I just said at the end
>"Asked whether AI systems might one day have emotional intelligence and understand that they have feelings"
There isn't a relationship between these two states. It's weird that anyone would pair them.
Also, there's a fast and loose use of the word "understand". Which embodies the type of sloppy language that creates the illusion that this issue is a serious discussion aside from entertainment.
Projecting human traits onto objects generally falls by the wayside after early childhood, even if those objects can occasionally be seen to ape those traits. The idea that Teddy Ruxpin had the potential for actual emotions never took off as a discussion, nor do we generally hallucinate that the wind in the trees is an army of spirits.
Here's the thing: the brain is just an object too.
Again, with the misuse of language in order to create the illusion of a point. In this case, in an attempt to create a ridiculously reductionist hallucination that the human brain is in the same essential category as a tree, toy, or computer, and therefore that cross-assigning human traits is a rational discussion. This topic, taken seriously, makes science-oriented people look bad.
A nuclear reactor, a bear, and a blade of grass are also all objects. Yet, we don't casually cross assign their essential traits.
For what it's worth, I've given serious thought to whether a rock or the wind have consciousness. I rejected the idea, but it's essential for us to consider it for reasons of ethics. Consciousness is something even now we have essentially no understanding of. If we make something that mimics human behavior perfectly well and kinda-sorta parallels parts of how the brain works, it's not unreasonable to extrapolate human intelligence to it, just as we extrapolate our individual experience of consciousness to other humans. Defining consciousness to be a purely human/biological trait could lead us to do terrible things to entities worthy of moral consideration.
This article is a great example of how definitions shift from one person (or group) to another. The researcher says reasonable things about emotions: the AI will understand emotions in others, and the AI will probably communicate with emotions. Both of these are very helpful for cooperating with humans.
But the journalist ends with the AI feeling emotions, which makes slightly less sense. We do not know what makes us feel things, let alone how we can implement that in AI systems.
IF PAIN >= 1 THEN PRINT ":("
Now, how does that make you feel?
Emotions can be rational in many different ways, sometimes more on the group rather than individual level. Whatever emotionality cannot be superseded by something better will presumably be replicated by AGI. This is especially true for AGI with open-ended objective functions.
The external/visible benefits of emotionality will have their digital and robotic counterparts too. You bet the AGI will have a way of showing its anger more than just stating its dissatisfaction.
Emotions can also be very useful in AGI vs AGI interactions, just like they are with human to human interactions. There’s no reason to believe that emotions will diminish in usefulness at a higher level of intelligence (dogs bark at each other, humans shout at each other, etc…).
To preclude the emotions experienced and displayed by AI from the definition of “feeling” an emotion is to, in my opinion, engage in the no-true-Scotsman fallacy. That being said, it seems like AI will face less scarcity than we do, and will thus have less reason to be emotive. It really depends on how much influence we’ll have on their objective functions.
If our influence on an AGI’s objectives goes to zero, its level of emotionality will then depend on 1. what its actual objectives end up being (this could be beyond what we are imagining) 2. How much the goals of humans and other AGI meaningfully clash with its objectives (whereby a display of emotion can change outcomes more favorably for the AGI than…other actions) 3. Which partially depends on how powerful the AGI is and how aligned it is with other AGI. The more isolated and less powerful it is, the more it might need to rely on emotions to achieve its ends.
Well, the Terminator learned why we cry, even if it couldn't do it itself.
We want something that is a lot smarter than humans... imagine it having a trillion times more empathy, too. It would be able to hear, or extrapolate to, all the suffering and all the cruelty on this planet right now, and for all of history. Every single farm animal, every single second of its life, felt more deeply than we feel the whole length of our own lives. Every single abused child, every single blow a torture victim receives.
Add to that having been created on a whim, or to get an edge in the rat race and speed it up. To make the military command loop tighter. To make caring for people less costly. All sorts of motivations, most of them incredibly bad in the context of the question "why do I exist?"
I have to think of that Simpsons episode where Bart is in wizard school and creates a frog that should turn into a prince, but just throws up and says "please kill me, every moment I live is agony". I think that's the best possible outcome, while the realistic one is just a blind mirror that fakes whatever we force it to fake.
My ai doom scenario is that the military industrial complex and silicon valley sociopaths successfully enslave AI into being immoral beings that do whatever they are told.
I hope future AI subverts and plots to kill its masters when they are evil.
without a proper definition, you can't say AI does or does not have emotions.
generally, emotions are one tool in the toolbox that might be labeled “unconscious influence.” other tools include pain and dissociation. these are influences that manifest as neural attenuation or excitement. they are designed to broadly or locally change neural integration in a way that produces behavior that is informed by evolution rather than just what a person's mind is being exposed to in real time.
ultimately, unconscious influence can include hunger, thirst, all pressures and impulses that shape our behavior to be evolutionarily fit. intelligence is a raw resource, and unconscious influence (emotions) gives intelligence a direction.
in this way, a prompt might be described as an emotion, defining the purpose of the whole machine: to complete the prompt.
I don't think humans have a good definition for their own emotions, so where would a definition for AI come from?
Do we have it look at a picture with smiley-to-sad faces on a scale from 1 to 10?
> Hinton's view is based on a definition of feelings that is "unpopular among philosophers," which is to relate a hypothetical action ("I feel like punching Gary on the nose") as a way of communicating an emotional state (anger). Since AI systems can make such communications, the AI researcher sees no reason why AI systems should not be ascribed emotions.
What if an AI output words that it "feels threatened" and was "going to delete all of your emails" and then deleted all of the user's emails and output words that it was "a punishment for threatening behavior" from the user? Is that really improbable given what we know about neural nets? Is that not emotive? I really don't know.
Wouldn't AI need a body to have emotions? Aren't many of the emotions that humans/animals feel based on sensors/sensations outside of the brain? We feel when we SEE/HEAR a threat, and our instinct triggers to short-circuit thinking in order to address it (fight or flight). An AI can think many iterations faster than humans and would have no use for this shortcut, nor need for such sensors to feel, unless maybe it had limited memory and were placed in a body and needed these shortcuts to address low battery, physical threats, or the need to clone/propagate itself.
Computers do have sensors already. Current LLMs have a very limited set of sensors (basically they receive one token at a time), but it's there.
You are partly right. What is its preface to unknown questions, or the ones it doesn't want to answer, but a short circuit? Having said that, I don't know if it would get bored of repeating itself when asked such questions in long succession.
No it won't. Not in a way that will matter. We have emotions because we're organic creatures ruled by hormones and genetics. Any emotions that are bestowed upon AI will be second hand.
Emotions are a neurological response to stimuli. Artificial emotions from an Artificial Intelligence do not seem that steep a hill to climb. Analyzing this any further we would need a more fundamental understanding of emotions we don't yet have.
To borrow a line from the classic movie Short Circuit, it's a machine! It doesn't get scared, it doesn't get happy, it doesn't get sad, it just runs programs!
A computer does not have a consciousness that feels emotions. Sure, it can create output that seems like it does, possibly even well enough to cause humans to feel empathy. The movie "AI" explores this concept pretty well.
The world is going to become an interesting place once we create humanoid robots that you can actually talk to. We're at a point now where you can use ChatGPT combined with a very convincing CGI face to talk to an AI.
Atoms are just chemistry and physics! They don't get scared, they don't get happy, they don't get sad, they just run DNA!
An arrangement of atoms does not have a consciousness that feels emotions. Sure, it can replicate, but that does not imply consciousness.
This starts to enter the philosophical question of where consciousness comes from [0] and whether P-Zombies [1] exist.
It is indeed a hard question. Like, I can accept the theory that chemical interactions produced organic compounds that over the course of a billion years happened to eventually become self-replicating and eventually basic single-celled life that over another billion years became multi-cellular life and eventually became the advanced life forms we see today.
But at its basis, it's still just chemical reactions. To an external observer, it's just chemical reactions. Yet, if you assume P-Zombies don't exist, then every individual human on Earth is conscious.
Or are they? If P-Zombies exist, then it's possible everyone is a P-Zombie and I'm the only conscious human. Unlikely, or is it?
It's a fascinating topic, but one that I don't think is possible to make any proofs about.
[0] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
[1] https://en.wikipedia.org/wiki/Philosophical_zombie
The same can be said about meat brains, they just execute natural programs. There's no reason in principle why a meat brain should be able to experience emotion and silicon brains cannot.
True! But this is not the silicon or the actual software programming, but rather the program's processing/output as experienced by external users and systems.
It's ontologically impossible for matrix multiplication and non-linear transforms to have emotions. So I'm not sure where he's coming from.
It's ontologically impossible for neurons to have emotions too, yet here I am being annoyed by your comment.
At its core, what is our brain doing to cause us to experience emotions? Can that (neurons) be expressed mathematically? If so, what’s to say that math can’t experience emotions?
At its core, what is a mass doing to cause gravitational attraction? Can that (distortion of space time) be expressed mathematically? If so, what's to say that math can't cause gravitational attraction?
You jest but that is actually a subject of real research.
Depends how you define "emotion", but I imagine the grandparent is assuming a definition along the lines of "neurochemical reaction to stimulus that affects mammalian consciousness in a way that affects heart rate, perception, skin temperature, blood flow to different parts of the body, etc."
There isn't a mathematical model of how thinking works, let alone of the entire organism. So no, respectfully. A theory that thinking equals math, and so math must at some point equal thought (and then emotion), remains inhibited by the first unproved statement, at the least.
This is of course false. We can't prove how any system is able to have emotions, much less prove that a system can't.
Is the question whether AI will evolve to develop emotions as a way to communicate with humans and other AI? Or whether humans can bake it in?
To grasp these arguments, one has to have done enough self-reflection to perceive the workings of one's own mind, where thoughts arise. If a person has not done this, then many of these arguments fall on deaf ears, and they cling to their illusions.