Are Large Language Models Sentient?



What we actually mean when we ask that question

Naim Kabir

Google just suspended the engineer Blake Lemoine for publishing conversations with the company’s chatbot development system, LaMDA.

According to Lemoine, these conversations are evidence that the system is sentient. Google disagreed, saying there is plenty of evidence against the claim.

This all strikes me as rather odd, mainly because the question of sentience is an unfalsifiable one. All the evidence in the world can’t prove the presence or absence of it—making it a useless technical question to pose in the first place.

All the evidence in the world can’t prove the presence or absence of sentience

It’s fun for a philosophical faff at the ol’ Parisian salon, sure, but not worthy of any serious energy. Especially not institutional energy.


“And that, Bob, is why we should give logistic regressions legal representation.” Photo by Shane Rounce on Unsplash

Many of you might think it is in fact the most important question to ask, and I understand where you’re coming from. The notion of sentience seems crucial for thinking about ethics, fairness, and rights.