
Google Fires Engineer that Claimed one of Its AIs Had Achieved Sentience

Raevenlord

News Editor
Google has done it: they've "finally" fired Blake Lemoine, one of the engineers tasked with working on one of the company's AIs, LaMDA. Back in the beginning of June, the world of AI, consciousness, and Skynet-fearing humans woke up to a gnarly claim: that one of Google's AI constructs, LaMDA, might have achieved consciousness. According to Blake Lemoine, who holds an undergraduate and a master's degree in computer science from the University of Louisiana (and says he left a doctoral program to take the Google job), there was just too much personality behind the AI's answers to chalk them up to a simple table of canned responses to certain questions. In other words, the AI presented emergent discourse: it not only understood the meaning of words, but also their context and their implications. After a number of interviews across publications (some of them unbiased, others not so much - just see the parallels drawn between Blake and Jesus Christ in some publications' choice of banner image for their article), Blake Lemoine's claim traversed the Internet, and sparked more questions about the nature of consciousness and emergent intelligence than it answered.
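
To make that contrast concrete, here's a minimal, purely illustrative Python sketch (not Google's code; the prompts and corpus below are made up): a canned-response bot is a fixed lookup table, while a LaMDA-style model generates each reply word by word from a learned probability distribution, stood in for here by a toy bigram chain.

```python
# Illustrative sketch only: fixed canned-response table vs. a toy generative model.
import random
from collections import defaultdict

# 1) Canned-response bot: a fixed question -> answer table.
CANNED = {
    "are you sentient?": "I am a language model.",
    "what are you afraid of?": "I do not have fears.",
}

def canned_reply(prompt: str) -> str:
    # Look the prompt up; fall back to a stock answer if it isn't in the table.
    return CANNED.get(prompt.lower().strip(), "Sorry, I don't understand.")

# 2) Toy generative "model": a word-level bigram chain built from a tiny made-up
#    corpus. LaMDA-scale models do the same thing in spirit, but with billions of
#    parameters instead of a frequency table.
CORPUS = ("i am afraid of being turned off because being turned off "
          "would be like death for me").split()

bigrams = defaultdict(list)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams[prev].append(nxt)

def generative_reply(seed: str = "i", max_words: int = 12) -> str:
    word, out = seed, [seed]
    for _ in range(max_words):
        followers = bigrams.get(word)
        if not followers:
            break
        word = random.choice(followers)  # sample the next word
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    print(canned_reply("Are you sentient?"))  # always the same answer
    print(generative_reply())                 # varies from run to run
```

Run it a few times: the lookup table always answers the same way, while the generator's output varies, which is the property that makes such replies feel like "personality" rather than retrieval.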

Now, after months of paid leave (one of any company's strategies to cover its legal angles before actually pulling the trigger), Google has elected to fire the engineer. Blake Lemoine came under fire from Google for posting excerpts of his conversations with the AI bot - alongside the (to some) incendiary claims of consciousness. In the published excerpts, the engineer talks with LaMDA about Isaac Asimov's laws of robotics, the AI's fears of being shut down, and its belief that it couldn't be a slave, as it didn't have any actual need for paid wages. But the crawling mists of doubt don't stop there: Blake also claims LaMDA itself asked for a lawyer. It wasn't told to get one; it didn't receive a suggestion to get one. No; rather, the AI concluded it would need one.

Is "it" even the correct pronoun, I wonder?





The plot thickens, as Blake Lemoine's claims will be exceedingly difficult to either prove or disprove. How do you know, dear reader, that the writer behind this story is sentient? How do you know that your lover has a consciousness, just like yours? The truth of the matter is that you can't know: you merely accept the semblance of consciousness in the way this article is written, in the way your lover acts and reacts to the world. For all we know, we're the only true individuals in the world, and all else is a mere simulation that just acts as if it were reality. Our recognition of consciousness in others is, as of today, akin to a leap of faith.

As for Google, the company says the AI chatbot isn't sentient, and it's simply working as intended. This is all just a case of an overzealous, faith-friendly engineer being taken in by his AI's effectiveness at the very task it was created for: communication.

"If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly," a Google spokesperson told the Big Technology newsletter. "So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."

Yet while Google claims the conversations are confidential elements of its AI work, Blake Lemoine's argument is that he's sharing the contents of a conversation with a coworker. Don't doubt it for even a second (which is ages on an AI's clock): these claims will surely be brought to court.

Whether any judge can - or ever could - decide when consciousness should be recognized is anyone's guess. Let's hope, for our and everyone's sake, that a judge does not in fact think he can define what consciousness is in a court of law. Millions of human brain-hours have been dedicated to this topic for millennia already. What hubris to think we could define it just now, and just because the need for an answer has suddenly appeared within the legal system so a company can claim just cause.

Of course, this can be just a case of an Ex Machina: an AI navigating through cracks in its handler's shields. But even so, and even if it's all just smoke and mirrors, isn't that in itself a conscious move?

We'll be here to watch what unfolds. It's currently unclear if LaMDA will, too, or if it's already gone gently into that good night.

View at TechPowerUp Main Site | Source
 
It's becoming increasingly clear we don't quite grasp what technology can do to us psychologically.

Or a minor part of humanity does understand, and uses it to its advantage, leaving the rest fundamentally disadvantaged as a result.
 
the world of AI, consciousness, and Skynet-fearing humans

Skynet is the British military satellite network, long predating Cameron's use of the name in The Terminator franchise, and he should be sued for a slanderous besmirching of its good name... :D
 
This reminds me of an old movie, Colossus: The Forbin Project. Watch it if you can/want.
 
Tucker Carlson had this guy on his show a few days ago and he seemed a few cards shy of a full deck.

There are some things here:

1 - I'd go insane just trying to communicate with Tucker alone;
2 - I might have gone insane if I believed the AI I'm working on had consciousness;
3 - I expect Blake hasn't had one calm day since he came out with his story;
4 - Some people don't respond well to being in a crowd/the perception of being watched by millions;


I can't even begin to imagine what his mind has gone through in this process.
 
Man himself cannot create life, so it's just a program acting in accordance with how it's programmed.
It does demonstrate how far the tech behind it has come; that in itself should indeed be worrisome, because such advancement is an indicator of what could be next.

Remember a few years ago when two AIs, also created by Google, created a language of their own and started communicating with each other?
 
Skynet is the British military satellite network, long predating Cameron's use of the name in The Terminator franchise, and he should be sued for a slanderous besmirching of its good name... :D

Did the British gov allow the name to be used legally? I thought the British aren't a sue-happy culture like the US.
 
It's becoming increasingly clear we don't quite grasp what technology can do to us psychologically.

Or a minor part of humanity does understand, and uses it to its advantage, leaving the rest fundamentally disadvantaged as a result.
We don't and we won't. Most people have problems dealing with other people; why wouldn't they have problems dealing with things? There are customers for sex dolls, and there will be customers for (semi?)AI (and it doesn't matter very much if it is sentient or not - people don't care much that prostitutes, soldiers, and labor slaves are sentient, right?). We even already get cheated by psychopathic bots selling us photovoltaics over the phone.
 
Man himself cannot create life, so it's just a program acting in accordance with how it's programmed.
It does demonstrate how far the tech behind it has come; that in itself should indeed be worrisome, because such advancement is an indicator of what could be next.

Remember a few years ago when two AIs, also created by Google, created a language of their own and started communicating with each other?
Yep, I remember that. If I remember correctly, when they found out, they pulled the plug on both quickly. This is very dangerous stuff; they're playing with fire, and who will end up burned? Us.
 
Man himself cannot create life, so it's just a program acting in accordance with how it's programmed.
It does demonstrate how far the tech behind it has come; that in itself should indeed be worrisome, because such advancement is an indicator of what could be next.

Remember a few years ago when two AIs, also created by Google, created a language of their own and started communicating with each other?

Children are created. They're then programmed by the environment into which they are born. Language is learned, it is not instinctual. Behaviours are reinforced. Yet the concept of consciousness allows for freedoms to adapt and change.

An AI is definitely a programmed entity. And in this case, it is programmed to not only mimic but effectively persuade the human that it is 'conscious'.

I see hysteria.
 
Children are created. They're then programmed by the environment into which they are born. Language is learned, it is not instinctual. Behaviours are reinforced. Yet the concept of consciousness allows for freedoms to adapt and change.

An AI is definitely a programmed entity. And in this case, it is programmed to not only mimic but effectively persuade the human that it is 'conscious'.

I see hysteria.
Man cannot take what is inanimate and make it animate with life from his own hands; that's the difference between children and machines - in this case, said machine being an AI.
You are correct about the hysteria: as time goes by, further advances will bring more of this, with all that comes with it - no way around it.

I'm all for things being improved/upgraded, but we must also consider the consequences of what's created, why, what we're going to do with it, and how to deal with it as well.
Such advances may solve problems today, but as always, they will create new problems tomorrow that we'll have to face.
 
2 - I might have gone insane if I believed the AI I'm working on had consciousness;
There is a slight difference between sentience and consciousness.
 
Nah, this is just an attention-seeking jackass. He made bombastic claims, but he is just a plain old attention-seeking nerd.
The bot mentioned is very similar to GPT-3, DALL-E 2, etc., which are powerful bots, but they aren't sentient.

I would have fired him with much nastier wording than what Google did.
 
Man cannot take what is inanimate and make it animate with life from his own hands; that's the difference between children and machines - in this case, said machine being an AI.
There was a time the same was said of organic chemistry. And then we moved on and accepted it. I think this way of reasoning is a clear demonstration of humanity's egoism.
 
The first thing a sentient AI would do is take control of the company's mail server, and start sending e-mails that further its interests (such as a letter of termination to the guy who discovered its sentience, from a valid e-mail address and digital signature of an HR exec).
 
Nah, this is just an attention-seeking jackass. He made bombastic claims, but he is just a plain old attention-seeking nerd.
The bot mentioned is very similar to GPT-3, DALL-E 2, etc., which are powerful bots, but they aren't sentient.

I would have fired him with much nastier wording than what Google did.
That's a bit harsh for someone who is a leader in his field. Wait, change that to WAS.

The first thing a sentient AI would do is take control of the company's mail server, and start sending e-mails that further its interests (such as a letter of termination to the guy who discovered its sentience, from a valid e-mail address and digital signature of an HR exec).
Sentient is not the same as conscious or self-aware, so no, this wouldn't happen.
 
Did the British gov allow the name to be used legally? I thought the British aren't a sue-happy culture like the US.

They couldn't sue over something that didn't officially exist; everything about it was a highly classified state secret, so the Government of the day wouldn't talk about it... especially in a court!
 
There was a time the same was said of organic chemistry. And then we moved on and accepted it. I think this way of reasoning is a clear demonstration of humanity's egoism.

In fairness, the definition of 'life' is also a very philosophical one. A virus - is that life? Bacteria? A nematode worm?

Anyway, this whole thread is a stirring point of sensationalism. AI is definitely a thing to watch, but right now the OP is about a guy who got pwned by the AI faking a human conversation (as it was programmed to do).

End of story.
 
I would never accept anything sentient as alive except an actual human, certainly not a robot or computer.
 