Wednesday, March 20th 2024

NVIDIA CEO Jensen Huang: AGI Within Five Years, AI Hallucinations are Solvable

Following his vivid GTC keynote, NVIDIA CEO Jensen Huang held a Q&A session that raised several ideas worth debating. One of them addressed the pressing concerns surrounding AI hallucinations and the future of Artificial General Intelligence (AGI). Speaking with confidence, Huang reassured the tech community that the phenomenon of AI hallucinations, where AI systems generate plausible yet unfounded answers, is a solvable problem. His proposed solution emphasizes feeding well-researched, accurate data into AI systems to mitigate these occurrences. "The AI shouldn't just answer; it should do research first to determine which of the answers are the best," noted Mr. Huang, adding that for every single question there should be a rule requiring the AI to research its answer. This describes Retrieval-Augmented Generation (RAG), in which large language models (LLMs) fetch data from external sources, such as additional databases, to fact-check their output.
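The RAG pattern described above can be sketched in a few lines of Python. This is a hedged illustration, not any vendor's API: the keyword-overlap scoring stands in for the vector search a production system would use, and the document store, function names, and prompt format are invented for the example.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation (RAG).
# Instead of answering from memory alone, the model is handed retrieved text
# and asked to ground its answer in it.

def score(query_words, doc):
    """Naive relevance score: count of shared words (stand-in for vector similarity)."""
    doc_words = set(doc.lower().split())
    return len(query_words & doc_words)

def retrieve(query, documents, k=2):
    """Return the k documents most relevant to the query."""
    query_words = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: score(query_words, d), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Wrap the retrieved context and the question into a grounded prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "NVIDIA announced its Blackwell GPU architecture at GTC 2024.",
    "Retrieval-augmented generation fetches external documents before answering.",
    "The Eiffel Tower is located in Paris.",
]
prompt = build_prompt("What did NVIDIA announce at GTC?", docs)
print(prompt)
```

A real deployment would replace `score` with embedding similarity and pass the resulting prompt to an LLM; the point is simply that the model answers from retrieved text rather than from its parametric memory alone.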

Another interesting comment from the CEO is that the pinnacle of AI evolution, Artificial General Intelligence, is just five years away. People working in AI are divided over the AGI timeline: while Mr. Huang predicts five years, some leading researchers, such as Meta's Yann LeCun, think we are still far from the AGI threshold and will first pass through dog- and cat-level AI systems. AGI has long been a topic of both fascination and apprehension, with debates often revolving around its potential to exceed human intelligence and the ethical implications of such a development. Critics worry about the unpredictability and uncontrollability of AGI once it reaches a certain level of autonomy, raising questions about aligning its objectives with human values and priorities. As for the timeline, no one knows; everyone makes their own prediction, and time will tell who was right.
Source: TechCrunch

21 Comments on NVIDIA CEO Jensen Huang: AGI Within Five Years, AI Hallucinations are Solvable

#1
Onasi
Sure, and fusion power is 10 years away, the cure for cancer is just around the corner, self-driving cars will soon be on every road, Mars colonies by 2035, and other cool sci-fi tech-bro ideas that get used as buzzwords and inevitably never arrive.
Like, I am not denying the advances that have been made, but what we have now can only be called AI in the loosest sense. Going from here to full-blown AGI in just 5 years is absolutely implausible.
Posted on Reply
#2
Lycanwolfen
So Skynet is going to kill us all then. I mean, how many movies and books do you need to watch or read before you stop the A.I.?
Posted on Reply
#3
Readlight
Let's chip cats and make them smarter.
ReadlightLet's chip cats and make them smarter.
And give WiFi connectivity to remotely control them.
Posted on Reply
#4
Dimitriman
I am 100% convinced now that Jensen is part of the Accelerationist group of billionaires. They think their billions will overcome any problems sentient AI could bring.
Posted on Reply
#5
Denver
DimitrimanI am 100% convinced now that Jensen is part of the Accelerationist group of billionaires. They think their billions will overcome any problems sentient AI could bring.
He's just a CEO trying to convince you that he's selling the solution, not the problem.
Posted on Reply
#6
Romoredux
If someone could do U.I. today, I'd do it. No questions asked, sign me up.
Posted on Reply
#7
pavle
Trust Jen Hsun to first bring us an automatic idiot, then the generalised idiot. That's just what this earth needs. Whoopty doo!
Posted on Reply
#8
ThrashZone
His solution emphasizes the importance of well-researched and accurate data feeding into AI systems to mitigate these occurrences
Hi,
Yeah, they said the fact checker was reliable too, and it turned out to be preprogrammed bias, "pick a word or phrase" BS, rather than considering the entire context, so even "mostly false" wouldn't apply to its obvious twisting of what was said, hehe.
AI will just be more of the same, only more long-winded.
Posted on Reply
#10
user556
Mitigated is not solved! It's just less often/severe. And I'll believe even that when I see it.
Posted on Reply
#11
MacZ
Even if you mitigate the hallucinations of LLMs by forcing them to check human sources, humans will still need to check the result of the inference when the AI produces something (like code).

And not all AI is an LLM.

You can be sure that AI will be used for military applications (as they are starting to be used in Ukraine for example).

Then hallucinations of AI will have a very different meaning.
Posted on Reply
#12
wolf
Performance Enthusiast
Remind me in 5 years. I really wonder if this person, who is far more in the know, can predict more accurately than the swaths of haters who would relish seeing him be wrong. Naturally he has a vested interest, given the financial incentive of the hardware that could power it, but that isn't strictly at odds with making the prediction.
Posted on Reply
#13
user556
MacZYou can be sure that AI will be used for military applications (as they are starting to be used in Ukraine for example).

Then hallucinations of AI will have a very different meaning.
That's demonstrably called genocide... somewhere else. Pretty simple when everything, including the press and medical staff, is considered a target.
Posted on Reply
#14
R-T-B
The Terrible PuddleAnd how is he going to solve the problem of AI poisoning?
Sounds like he wants to curate his data.

That is actually reasonable. Training off the internet was always going to result in an AI that is as dumb as a cat meme.
Posted on Reply
#15
MacZ
wolfRemind me in 5 years, I really wonder if this person far more in the know can accurately predict better than the swaths of haters who would relish in seeing him be wrong. Naturally he has a vested interest in it, because of the financial incentive of the hardware that could power it, but that isn't strictly at odds with making the prediction.
I'm quite enthusiastic about AI and machine learning.

But the way we do AI now is not new tech. It has been done this way for quite some time, for example for OCR or speech recognition. What has changed is the miniaturization of silicon, which has allowed much better performance and results.

And the way it is done, it is really hard to get a perfect 100% result. Since you can't train a model on every possible input, you can't predict every possible output. And even if you could train a model on every possible input, each successive input or piece of feedback would also erase a bit of the previous training.

This problem has been with us for a very long time, and I don't see how it will be addressed in the future, when models will obviously become more and more complex. If we understood how the neuron layers really worked, we wouldn't need to train them: we would just build them directly. Instead, we train models on a finite set of inputs and expect that, when confronted with other inputs, they will give sensible results. But since they are black boxes in essence, you may get weird results. What Jensen Huang is advocating is postprocessing, but that may not be applicable to every implementation of AI. Not everything is an LLM.

Maybe AGI will help, if it really arrives in five years, but I feel that, like ours, Artificial General Intelligence will be flawed.
Posted on Reply
#16
LazyGamer
My sister, who works as a proctologist, says AGI will be achieved in this half of the century.
Posted on Reply
#17
nguyen
Man, let's hope AGI won't become like Ultron, who after 5 minutes of looking at the internet decided humanity should be culled.
Posted on Reply
#18
Onasi
nguyenMan let hope AGI won't become like Ultron where 5 mins look into the internet and decided humanity should be culled.
Hell, I am a human and I frequently think that humanity is a failed project and should be culled. Not an unreasonable conclusion to make, honestly.
Posted on Reply
#19
remixedcat
ReadlightLet's chip cats and make them smarter.

And give WiFi connectivity to remotely control them.
poor cats
Posted on Reply
#20
Chomiq
All of that is just the ground work for switching over to subscription model for enterprise.
Posted on Reply
#21
ypsylon
If AGI turns out like the X franchise, then let's stop it now before it starts killing every organic lifeform in the Universe. :eek:

The last thing we need is the Xenon murdering us all while we don't have ATF (AGI Task Force)/Terran Protectorate naval assets to deal with the threat.
Posted on Reply