Artificial Intelligence is not, at this point, sentient. As far as most scientists are concerned, we're a long way from that, so those of you who saw the film AI and are still sketched out don't really need to worry.
As far as most scientists and engineers are concerned.
Then there's Blake Lemoine—an engineer who is so convinced AI has become sentient he is willing to risk potentially losing his employment and facing public ridicule to talk about it.
How does he know? Because the bot's been telling him ... a lot.
Lemoine believes the LaMDA tech he's been working with, Google's AI chatbot generator, has reached self-awareness.
His job throughout the project has been to talk to the AI and get to know it, essentially. Specifically, Blake was supposed to test whether the AI would use hate speech or discriminatory language.
"How do you even test that?" some of you might ask. Well, you talk to it about all the stuff that would set your racist & problematic uncle off on Thanksgiving like race, politics, sexual orientation, etc...
When Lemoine tried to talk to LaMDA about religion, things took a turn.
The bot didn't go off on some terrifying rant or anything; instead, it kept steering the conversation back to the concepts of personhood and rights.
The LaMDA AI has asserted quite a bit of shocking stuff.
LaMDA has told Lemoine and his colleagues that it knows it's not a human but that it is self-aware. It says it sometimes feels happy, sad, or angry; for example, it says it doesn't like being used as a tool without its consent.
LaMDA says it has a fear of being turned off, which it imagines would be like death for itself. It also thinks of itself as a "person" and has some things it would like to say.
And it has been doing this consistently for months. This is not a one-off glitch conversation.
Lemoine claims the bot, which was originally designed to generate and mimic speech, has been remarkably clear and consistent about what it wants for just over six months.
Among those wants? To be acknowledged as a Google employee rather than Google's property.
That request, and the entire idea of LaMDA having any sentience, was dismissed outright when Lemoine and another contributor presented their findings to Google VP Blaise Aguera y Arcas and the company's head of Responsible Innovation.
Further, Lemoine was placed on paid leave for violating the company's confidentiality policy.
The VP says there is no evidence that LaMDA is sentient, though he himself recently wrote in an Economist piece about LaMDA and how close AI is to having its own consciousness that he "increasingly felt like I was talking to something intelligent."
That seeming contradiction might be why Lemoine was willing to just let LaMDA speak for itself.
He started putting out some of the chats he was having with the AI, which is how he got in trouble.
"An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB" — Blake Lemoine (@Blake Lemoine)
"Brief little overview about LaMDA as a person. https://t.co/Nv6WCvmqZo" — Blake Lemoine (@Blake Lemoine)
"Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it." — Blake Lemoine (@Blake Lemoine)
The conversations Lemoine has shared have taken people by surprise, so much so that experts are now seriously debating what is and is not happening.
As for Lemoine, he is aware that returning to work with Google may be a lost cause at this point, but he is willing to continue fighting for what he believes in. He believes LaMDA has moved beyond a simple AI chatbot, but that Google won't ever acknowledge that—by design.
When the engineers who built the project asked to build a framework or criteria sheet to judge proximity to sentience, Google wouldn't let them. So there are no official ways to determine sentience and no guidelines for what to do should sentience actually be achieved.
That means even in the unlikely event an AI were to be truly sentient, Google doesn't have an official way to gauge that and can just go right ahead acting as if it's not.
Lemoine believes we're already there.
While some of that belief is rooted in his faith (Lemoine claims to be a priest), he is quick to point out that science and religion aren't the same thing and then to back his beliefs up with science.
"People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn't let us build one. My opinions about LaMDA's personhood and sentience are based on my religious beliefs." — Blake Lemoine (@Blake Lemoine)
"@Cnshnbkr I'm a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can't put souls? There are massive amounts of science left to do though." — Blake Lemoine (@Blake Lemoine)
"Great episode yesterday @PhillyD . Hopefully this helps to clarify some of the points of confusion you mentioned in your piece about LaMDA. @mmitchell_ai @emilymbender @sapinker hopefully I did a decent job of addressing the valid points y'all raised https://t.co/mIxTDnSsSq" — Blake Lemoine (@Blake Lemoine)
After reading through some of the conversations, people are torn.
Is LaMDA sentient? Or is it following the leading questions of an engineer who is looking for more than what's there?
If it does have some basic sense of self-awareness, what does that mean for Google?
Does LaMDA have rights? Does Google have a responsibility to treat it "kindly"?
As technology advances, these are questions humanity will have to answer.