Geoffrey Hinton, the British-Canadian cognitive psychologist, computer scientist, and 2018 Turing Award winner for his work on deep learning, has departed Google after a decade-long tenure on the Google Brain team. His research on neural networks, dating back to the 1980s, helped shape the current landscape of deep learning and artificial intelligence through direct and indirect contributions over the years. AlexNet, designed and developed in 2012 in collaboration with his students Alex Krizhevsky and Ilya Sutskever, formed the backbone of the modern computer vision and image recognition techniques used in today's generative AI. Hinton joined Google when the company won the bid for the tiny startup he and his two students formed in the months following AlexNet's debut. Sutskever left Google in 2015 to become co-founder and Chief Scientist of OpenAI, the creators of ChatGPT and one of Google's most prominent competitors.
In an interview with the New York Times, Hinton said that he quit his position at Google so that he could speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took shots at Google's core business, web search, leading Google to respond with Bard in a manner more reactive than deliberate. The concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technology to flood the internet with fake photos, text, and even videos, until the average person can no longer tell what is real and what was manufactured by an AI prompt.
Hinton believes that the latest systems are starting to encroach on, or even eclipse, human capabilities, telling the BBC: "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that. Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be." Hinton thinks that as the systems improve, newer generations of AI could become more dangerous. Trained on ever-larger data sets, they may develop stronger learning capabilities that could lead to an AI generating and running its own code or setting its own goals. "The idea that this stuff could actually get smarter than people—a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."
Dr. Hinton admits that at 75 years old it is time to retire, and makes it very clear that he did not quit so that he could criticize Google or the team he worked with for over 10 years. He told the BBC that he has "some good things" to say about Google and its approach to AI, but that those comments would be more credible if he did not work for the company.
View at TechPowerUp Main Site | Source