Monday, July 25th 2022

Google Fires Engineer Who Claimed One of Its AIs Had Achieved Sentience

Google has done it: they've "finally" fired Blake Lemoine, one of the engineers tasked with working on one of the company's AIs, LaMDA. Back in the beginning of June, the world of AI, consciousness, and Skynet-fearing humans woke up to a gnarly claim: that one of Google's AI constructs, LaMDA, might have achieved consciousness. According to Blake Lemoine, who holds an undergraduate and a master's degree in computer science from the University of Louisiana (and says he left a doctoral program to take the Google job), there was just too much personality behind the AI's answers to chalk them up to a simple lookup table of canned responses. In other words, the AI presented emergent discourse: it understood not only the meaning of words, but their context and their implications. After a number of interviews across publications (some of them unbiased, others not so much - just see the parallels drawn between Blake and Jesus Christ in some publications' choice of banner image), Blake Lemoine's claim traversed the Internet and sparked more questions about the nature of consciousness and emergent intelligence than it answered.

Now, after months of paid leave (one of any company's strategies to cover its legal angles before actually pulling the trigger), Google has elected to fire the engineer. Blake Lemoine came under fire from Google for posting excerpts of his conversations with the AI bot - alongside the (to some) incendiary claims of consciousness. In the published excerpts, the engineer talks with LaMDA about Isaac Asimov's laws of robotics, the AI's fears of being shut down, and its belief that it couldn't be a slave as it had no actual need for paid wages. But the crawling mists of doubt don't stop there: Blake also claims LaMDA itself asked for a lawyer. It wasn't told to get one; it didn't receive a suggestion to get one. No; rather, the AI concluded it would need one.

Is "it" even the correct pronoun, I wonder?
The plot thickens as Blake Lemoine's claims will be exceedingly difficult to either prove or disprove. How do you know, dear reader, that the writer behind this story is sentient? How do you know that your lover has a consciousness, just like yours? The truth of the matter is that you can't know: you merely accept the semblance of consciousness in the way the article is written, in the way your lover acts and reacts to the world. For all we know, we're the only true individuals in the world. All else is a mere simulation that just acts as if it was reality. What separates our recognition of consciousness is, as of today, akin to a leap of faith.

As for Google, the company says the AI chatbot isn't sentient and is simply working as intended. This is all just a case of an overzealous, faith-friendly engineer being consumed by the AI's effectiveness at the very task it was created for: communication.

"If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly," a Google spokesperson told the Big Technology newsletter. "So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."

Yet while Google claims the conversations are confidential elements of its AI work, Blake Lemoine's argument is that he was merely sharing the contents of a conversation with a coworker. Don't doubt it for even a second (which is ages on an AI's clock): these claims will surely be brought to court.

Whether any judge can - or ever could - decide when consciousness should be recognized is anyone's guess. Let's hope, for everyone's sake, that no judge in fact thinks he can define what consciousness is in a court of law. Millions of human brain-hours have been dedicated to this topic over millennia already. What hubris to think we could define it just now, and only because the need for an answer has suddenly appeared within the legal system so a company can claim just cause.

Of course, this could all just be a case of Ex Machina: an AI navigating through cracks in its handler's shields. But even so, and even if it's all just smoke and mirrors, isn't that in itself a conscious move?

We'll be here to watch what unfolds. It's currently unclear whether LaMDA will be, too, or if it's already gone gently into that good night.