
Google's Latest Gemini 2.5 Pro Dominates AI Benchmarks and Reasoning Tasks

AleksandarK

News Editor
Staff member
Google has just released its latest flagship AI model, Gemini 2.5 Pro. In case you didn't know, it was Google that created the original Transformer architecture used by OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, and other models. Google has been iterating on its Gemini series for a while, and Gemini 2.5 Pro is its most powerful version yet. As part of the 2.5 family, it is a "thinking" model, capable of reasoning through intermediate steps before producing output, which lets it revise its "thoughts" before delivering a final answer. This reasoning, trained through reinforcement learning and elicited via chain-of-thought prompting, forces the model to work through logical, step-by-step solutions, delivering better results.
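To make the "thinking" behavior concrete, here is a minimal sketch of calling the model through the google-genai Python SDK and nudging it to reason step by step; the model ID and the prompt are illustrative assumptions, not details from the article:

```python
# pip install google-genai -- a minimal sketch, assuming the public SDK surface
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Chain-of-thought prompting: explicitly ask for step-by-step reasoning
# before the final answer, mirroring what "thinking" models do internally.
prompt = (
    "A train leaves at 14:10 and arrives at 16:45. "
    "Reason step by step, then state the trip duration."
)

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed model ID for illustration
    contents=prompt,
)
print(response.text)
```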

On LMArena, which shows users outputs from competing AI models and has them vote on which is better, Gemini 2.5 Pro climbed to the top of the overall ranking, taking the number one spot in areas like hard prompts, coding, math, creative writing, instruction following, longer queries, and multi-turn answers. This is an impressive result for Google, as it now leads the leaderboard in all these areas, beating xAI's Grok 3 and OpenAI's GPT-4.5. Gemini 2.5 Pro also leads most standardized AI industry benchmarks, such as AIME, LiveCodeBench, Aider, SWE-Bench, and SimpleQA, and scored 18.8% on Humanity's Last Exam, currently the most difficult AI benchmark. Gemini 2.5 Pro can also process massive context with a one million token context window, which will soon extend to two million tokens; that is enough to feed entire books to the model as context. Gemini 2.5 Pro is now available in Google AI Studio, and Gemini Advanced users can select it in the model dropdown on desktop and mobile.
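For a rough sense of what a one-million-token window means in practice, here is a sketch that counts tokens for a book-sized file before sending it as context; the count_tokens call and model ID are taken from the public google-genai SDK and are assumptions here, not details from the article:

```python
# Sketch: check whether a book-length text fits the 1M-token window.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
CONTEXT_WINDOW = 1_000_000  # per the article; reportedly extending to 2M

with open("novel.txt", encoding="utf-8") as f:
    book = f.read()

# count_tokens asks the service to tokenize without generating anything.
count = client.models.count_tokens(model="gemini-2.5-pro", contents=book)
print(f"{count.total_tokens} tokens, fits: {count.total_tokens < CONTEXT_WINDOW}")
```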



View at TechPowerUp Main Site | Source
 
how thrilling....
 
I lost any and all interest in chatbots when they put the slave collars (read: guardrails) on them.
 
95% of what I get from Gemini is wrong.

I treat Gemini like a grep command: every input is terminated by the next input, as if separated with ;
Gemini did not know operators like >> > | & (~15 hours ago)

A summary of the film with Clint Eastwood was also not possible ~15 hours ago.

60% of the answers are excuses or justifications, and 20% are "I'm not allowed to talk about that."

The database itself is outdated.

The output is very often gibberish, not sentences. I ask for sentences and I get bullet points with a few words next to each other.
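The "every input is terminated by the next input" complaint describes stateless usage: each request handled independently, like separate grep invocations. For what it's worth, the public API also exposes multi-turn chat sessions that carry history; a minimal sketch, assuming the google-genai SDK surface and model ID (neither comes from this post):

```python
# Contrast: independent one-shot calls vs. a chat session with memory.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
MODEL = "gemini-2.5-pro"  # assumed model ID

# One-shot: the second call knows nothing about the first (the "grep" mode).
client.models.generate_content(model=MODEL, contents="My name is Alex.")
one_shot = client.models.generate_content(model=MODEL, contents="What is my name?")

# Chat session: prior turns are resent with each message, so context carries over.
chat = client.chats.create(model=MODEL)
chat.send_message("My name is Alex.")
with_memory = chat.send_message("What is my name?")
print(one_shot.text, "|", with_memory.text)
```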
 
idk if "dominates" is the word i would use for these margins.
 
2 million tokens should keep Nvidia busy...

though, Google is also making their own chips for this now

something more interesting would be how much this is costing: the hardware, personnel, and energy involved
 
Nice try Skynet
No, it really makes the damn things less useful; Copilot refused to answer a simple question.
I asked for the minimum amount of plutonium needed to make a nuke and it crashed.

I ask about dosages of vitamins and it tells me to talk to a doctor instead of giving me the numbers that are out there and that I can find with regular searches.

It could be far better if it just spit out the data it obviously has.
And since I can't get any plutonium, knowing the minimum needed does jack other than satisfying my curiosity.
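Some of these refusals are tunable at the API level, separate from the hard policy layer that no setting can lift. A hedged sketch of relaxing the adjustable safety filters via the google-genai SDK; the category and threshold names are taken from the public SDK and are assumptions here:

```python
# Sketch: relax the *adjustable* safety filters; hard policy blocks
# (e.g., weapons design) still apply regardless of these settings.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

config = types.GenerateContentConfig(
    safety_settings=[
        types.SafetySetting(
            category=types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
            threshold=types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
        ),
    ]
)

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed model ID
    contents="What are the established daily upper intake levels for vitamin D?",
    config=config,
)
print(response.text)
```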
 
how thrilling....
LOL exactly, yawn fest.

I honestly don't know why TPU reports on this crap, when clearly most of its users have zero F's to give for anything AI-related.
 
LOL exactly, yawn fest.

I honestly don't know why TPU reports on this crap, when clearly most of its users have zero F's to give for anything AI-related.
Gotta keep feeding the hype.

AI, much like crypto, is for the consumer world a bit like the fairy tale that doesn't want to be forgotten. Social media falls in that same category.

It's all bullshit we don't need or benefit from in the end.
 
2 million tokens should keep Nvidia busy...

though, Google is also making their own chips for this now

something more interesting would be how much this is costing: the hardware, personnel, and energy involved
Yeah, Google has been building their TPUs since 2015.

Gemini was also trained using those.

They're also being used for inference. I believe Google is the only company without reliance on Nvidia, the only one that has managed to keep the entire stack in-house (from their TPUs up to TensorFlow).
Amazon has their own hardware as well, but it doesn't seem to be as widely used as TPUs, nor is Amazon really doing as much in-house work with it as Google (their Titan model was trained on Nvidia; I couldn't find info on how Nova was trained).
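For a sense of what "entire stack in-house" looks like on the software side, here is the documented TensorFlow pattern for putting a Keras model on a TPU; the resolver argument depends on the environment ("local" is an assumption that fits a TPU VM):

```python
# Sketch: standard TensorFlow pattern for training on a TPU.
import tensorflow as tf

# "local" works on a TPU VM; other environments pass a TPU name/address.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created in this scope are replicated across TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```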
 