Monday, May 1st 2023

"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

Geoffrey Hinton, the British-Canadian cognitive psychologist, computer scientist, and 2018 Turing Award laureate for his work on deep learning, has departed the Google Brain team after a decade-long tenure. His research on neural networks, dating back to the 1980s, helped shape the current landscape of deep learning and artificial intelligence through direct and indirect contributions over the years. AlexNet, designed and developed in 2012 in collaboration with his students Alex Krizhevsky and Ilya Sutskever, formed the backbone of the modern computer vision and image recognition techniques used today in generative AI. Hinton joined Google in 2013 when the company won the bidding for the tiny startup he and his two students had formed in the months following the reveal of AlexNet. Sutskever left Google in 2015 to become a co-founder and Chief Scientist of OpenAI, the creator of ChatGPT and one of Google's most prominent competitors.

In an interview with the New York Times, Hinton said that he quit his position at Google so that he could speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took shots at Google's core business, web search, leading Google to respond with Bard in a manner more reactive than deliberate. The concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technology to flood the internet with false photos, text, and even videos, until the average person can no longer tell what is real and what was manufactured from an AI prompt.
Hinton believes that the latest systems are starting to encroach on, or even eclipse, human capabilities, telling the BBC: "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that. Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be." Hinton thinks that as these systems improve, newer generations of AI could become more dangerous. Trained on ever larger data sets, they may exhibit better learning capabilities, which could lead to an AI generating and running its own code or setting its own goals. "The idea that this stuff could actually get smarter than people—a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Dr. Hinton admits that at 75 years old it is time to retire, and makes it very clear that he did not quit in order to criticize Google or the team he worked with for over 10 years. He told the BBC that he has "some good things" to say about Google and its approach to AI, but that those comments would be more credible if he did not work for the company.
Source: New York Times

30 Comments on "Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

#26
Guwapo77
the54thvoid: That's not sound reasoning. Guns were the portable evolution of cannons (an evolution of Chinese fire lances). They were never created for the purpose of hunting; rather, they were invented to win battles. AI is not such a thing; it is simply an entity in its own right, with an exceptionally diverse field of application, most of which needs to be heavily controlled to avoid the consequences.

IMO, AI created in our world will be no different than the people and nations that create it. Greedy, ideological, and/or altruistic. There will be bad AI and good AI. Problem is, a good AI probably wouldn't have the ethics to beat a bad AI. Story of life.
Not the greatest of analogies I admit, but you get the point. AI will be used for bad and that's a fact no matter how good or how bad the analogy will be.
Posted on Reply
#27
bug
the54thvoid: I see the bigger picture. I was inferring the 'guns' metaphor wasn't appropriate given that they were created for a purpose intended to harm others. AI isn't the same. Its end result will be determined by those who guide it.
Even that is inaccurate. Can you definitively say guns were invented to harm others rather than to protect one's own?

But again "guns" was just a word in an analogy. Let it go, take a step back and think about the analogy instead.
Posted on Reply
#28
AusWolf
Prima.Vera: I think all the AI doomsayers are overreacting and exaggerating too much. AI is not dangerous; it just uses freely available public info online.
What is dangerous are tools such as deepfakes, voiceovers, etc., that can be used for malicious purposes, so those can be regulated.
For me, tools such as Bard, ChatGPT and Bing Chat are tremendously helpful in my work, helping me complete tasks in one hour that usually took a day or more to achieve.
It all depends how you use those tools.
And that's why it's dangerous. If something can be used for something bad, people will find a way. Besides, free public info is not necessarily (and a lot of times isn't) correct. Before long, we'll live in a world where literally everything is fake, and it's impossible to tell what the truth about any subject is.
Posted on Reply
#29
Aquinus
Resident Wat-man
AusWolf: And that's why it's dangerous. If something can be used for something bad, people will find a way. Besides, free public info is not necessarily (and a lot of times isn't) correct. Before long, we'll live in a world where literally everything is fake, and it's impossible to tell what the truth about any subject is.
I hate how true this statement could possibly become because it's already getting pretty bad. With that said, it's a bunch of intellectual dishonesty and people not knowing how to critically think. I would like to think that given all available information on a subject and an as objective view as possible, an individual with some level of intellect should be able to make some determinations based on real world observations. Maybe that's asking a lot from the modern day individual though.
Posted on Reply
#30
AusWolf
Aquinus: I hate how true this statement could possibly become because it's already getting pretty bad. With that said, it's a bunch of intellectual dishonesty and people not knowing how to critically think. I would like to think that given all available information on a subject and an as objective view as possible, an individual with some level of intellect should be able to make some determinations based on real world observations. Maybe that's asking a lot from the modern day individual though.
I think the modern day individual has proven that anything is believable if a guy wearing a suit says it on TV. One of my neighbours still wears a surgical mask to this day while sitting alone in his car. There's way too much fakery going on even these days. With AI, it'll spin completely out of control, I'm afraid.
Posted on Reply