
Google's Gemini AI Wins a Gold Medal at International Math Olympiad

AleksandarK

News Editor
The inflection point for difficult problem-solving using AI has arrived as Google's best AI model, Gemini with Deep Thinking enabled, has officially won a gold medal at the International Math Olympiad (IMO). Solving some of the world's toughest math challenges involves reasoning and logic that often require unique solutions and creative approaches. Google's DeepMind team has officially verified these results, and IMO President Prof. Dr. Gregor Dolinar noted that "We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points—a gold medal score. Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow."

While this may seem like a "computers are good at math" moment, it is fundamentally different. The Gemini model used here works end-to-end in natural language, taking the problem descriptions as plain text input and generating mathematical proofs that IMO graders found clear and precise. Google's performance this year was due to an upgraded version of Gemini Deep Think, an enhanced reasoning layer designed to tackle complex questions. The design integrates the company's latest research, including parallel thinking, which allows the model to explore and synthesize multiple solution paths simultaneously before committing to a final answer, moving beyond a single linear chain of reasoning. This multistep, layered approach suggests that AI reasoning is steadily advancing toward tackling hard problems without hand-crafted, problem-specific tooling. Gemini completed all of its solutions within the official 4.5-hour time window, qualifying it for the medal.
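The "parallel thinking" idea described above, running several independent solution attempts and reconciling them before committing to an answer, can be sketched in miniature. The toy below is purely illustrative (trivially different strategies plus a majority vote), not Google's actual Deep Think implementation, which is not public:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_in_parallel(x, y):
    """Toy 'parallel thinking': run several independent solution
    strategies concurrently, then synthesize a final answer by
    agreement. A real system would run full reasoning chains here."""
    strategies = [
        lambda: x + y,        # direct computation
        lambda: sum([x, y]),  # alternative route to the same answer
        lambda: y + x,        # commutativity cross-check
    ]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(s) for s in strategies]
        results = [f.result() for f in futures]
    # Synthesis step: commit to the answer most paths agree on,
    # and report how strongly they agree.
    best = max(set(results), key=results.count)
    agreement = results.count(best) / len(results)
    return best, agreement

answer, agreement = solve_in_parallel(17, 25)
print(answer, agreement)  # → 42 1.0
```

The point of the structure is the final synthesis step: no single chain of reasoning is trusted on its own, and disagreement between paths is a signal to keep exploring rather than answer.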



Exact compute costs are unknown, but running a model for 4.5 hours with test-time scaling enabled can be quite expensive, especially at the multi-trillion-parameter scale of the highest-end models on Google TPUs. Google will soon make the Deep Think model available to Google AI Ultra subscribers, a $249.99/month tier that also includes higher usage rate limits.

View at TechPowerUp Main Site | Source
 
But it can't re-order my regular mobile Starbucks order for me. Come on Google, it's like you want me to die on my drive to work.
 
Finally some AI progress not based on LLMs.
 
Yet it still tells me city X is bigger than city Y because X has 100k people and Y has 300k people. It has no actual concept of anything, not even what constitutes "bigger" by the most basic kindergarten metric (bigger number is bigger). And Gemini is full of such "gems".
 
A calculator wins a math tournament... man, I'm stunned...
"Google's DeepMind team has officially verified these results"
Well, if they verified it, then I'm sure it's trustworthy.
 
Using machines of increasing complexity to help solve mathematical problems of increasing complexity is how we got current computers, so if newer algorithms help in that respect, then it's just one more step in the right direction.
 
Those guys are either paid, or plain old morons.
Why on earth would you award a medal to a machine in a competition that is not only meant for humans, but considers the use of calculators or even a damned book cheating?
 
 
Perhaps they should have a tournament for trying to reason people out of the holes they've reasoned themselves into, like poor ol' Geoff Lewis...
 
Does this mean we can expect Gemini to finally set reminders or do other regular tasks that Google Assistant does with ease?
 