
NVIDIA GeForce RTX 40 Series "AD104" Could Match RTX 3090 Ti Performance

AleksandarK (News Editor)
NVIDIA's upcoming GeForce RTX 40-series "Ada Lovelace" graphics card lineup is slowly shaping up to deliver a significant performance uplift over the previous generation. Today, according to the well-known hardware leaker kopite7kimi, a mid-range AD104 SKU could match the performance of the last-generation flagship GeForce RTX 3090 Ti. The full AD104 SKU is set to feature 7,680 FP32 CUDA cores, paired with 12 GB of 21 Gbps GDDR6X memory on a 192-bit bus. With a large TGP of 400 Watts, it should match the performance of the GA102-350-A1 SKU found in the GeForce RTX 3090 Ti.
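For reference, the leaked memory configuration works out to exactly half the RTX 3090 Ti's memory bandwidth. A minimal sketch of that arithmetic (the 192-bit bus and 21 Gbps data rate come from the leak; the 3090 Ti's 384-bit, 21 Gbps configuration is its published spec):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.

def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

ad104_rumored = peak_bandwidth_gbs(21, 192)  # leaked AD104 config -> 504 GB/s
rtx_3090_ti = peak_bandwidth_gbs(21, 384)    # published RTX 3090 Ti config -> 1008 GB/s

print(f"AD104 (rumored): {ad104_rumored:.0f} GB/s")
print(f"RTX 3090 Ti:     {rtx_3090_ti:.0f} GB/s")
```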

As for naming, this full AD104 SKU should end up as the GeForce RTX 4070 Ti. Of course, we must wait and see what NVIDIA decides to do with the lineup and what the final models will look like.


View at TechPowerUp Main Site | Source
 
Disappointed if true, AD104 should beat GA102 by 30% like what happened with Pascal
[Attachment: perfrel_3840_2160.png - relative performance chart at 3840x2160]
 
So, the new 400W model matches the old 450W model? Considering they moved from the "bad" 8nm Samsung node to the "great" 5nm TSMC node, it's not exactly a breathtaking result.
I believe either performance will be higher or the wattage much lower. 50W less for the same performance seems like too little to me.
 
So, the new 400W model matches the old 450W model? Considering they moved from the "bad" 8nm Samsung node to the "great" 5nm TSMC node, it's not exactly a breathtaking result.
I believe either performance will be higher or the wattage much lower. 50W less for the same performance seems like too little to me.
Also, less VRAM.

Pretty underwhelming. Maybe the Samsung node was not as bad as we had believed.
 
yeah that is preeetty weak if true
 
Disappointed if true, AD104 should beat GA102 by 30% like what happened with Pascal
AD104 is a very small chip.

So, the new 400W model matches the old 450W model? Considering they moved from the "bad" 8nm Samsung node to the "great" 5nm TSMC node, it's not exactly a breathtaking result.
I believe either performance will be higher or the wattage much lower. 50W less for the same performance seems like too little to me.
Or, more likely, the chip is pushed way beyond the optimal efficiency curve. From the same leaker, AD102 is supposed to bring 70-80% more performance at 450W, and that's a big jump in efficiency.
 
Nvidia will still be the more compelling option (more comprehensive forward looking package like RTX + DLSS). Updating your power supply is a small price to pay for greater quality. The gimmicky chiplet package on Navi 31 will only introduce latency and frame-rate consistency issues.
 
Disappointed if true, AD104 should beat GA102 by 30% like what happened with Pascal
I argued with people at Pascal's launch who said the 1080 was not enough of a step forward from their 980 Tis, and that the 980 Ti overclocks to 1500 MHz easily, etc. As if the 1080 could not OC to 2000 MHz+.
Nvidia will still be the more compelling option (more comprehensive forward looking package like RTX + DLSS). Updating your power supply is a small price to pay for greater quality. The gimmicky chiplet package on Navi 31 will only introduce latency and frame-rate consistency issues.
"comprehensive package" & "greater quality" :wtf:
Also, I'm glad you already know how N31 will perform and what issues (if any) it will have.

We normal people will wait for reviews and prices before deciding. Not blindly buying from company N.
 
Also, less VRAM.

Pretty underwhelming. Maybe the Samsung node was not as bad as we had believed.
The Samsung node was good; it's the crap GDDR6X that hogged power... plus, if Nvidia had used Samsung 7LPE, they would have reached 2500 MHz on a 3090 Ti...
 
I argued with people at Pascal's launch who said the 1080 was not enough of a step forward from their 980 Tis, and that the 980 Ti overclocks to 1500 MHz easily, etc. As if the 1080 could not OC to 2000 MHz+.

"comprehensive package" & "greater quality" :wtf:
Also, I'm glad you already know how N31 will perform and what issues (if any) it will have.

We normal people will wait for reviews and prices before deciding. Not blindly buying from company N.
The 980 Ti had way more OC potential, though. At stock, the 1070 was 10-15% faster, but a 980 Ti at 1500 MHz was on par with a 2100 MHz 1070.
 
With electricity costs going up all over the world, how come the 4070 Ti draws more than double the power of a 1080 (180 W)? New generations should bring more performance at the same power; we are getting expensive space heaters instead.
Add the cost of air conditioning to that 400 W heating up your room, and it becomes quite an expensive hobby.
On second thought, the RTX 3070 Ti has awful performance per watt, especially compared to the RTX 3070 and RX 6800, so there's hope my intended upgrade, the RTX 4070, will perform close to the Ti version but at a much lower power draw.
 
I'm not even looking at the performance. If that mid-range card uses between 400-500 W, then I will not be touching this generation even if the flagship is like 200% faster than the 3090 Ti. I will not survive the summer with a 500 W or higher power-consuming card in my system, as my 350 W 3080 Ti is already pumping a crap ton of heat into my room.
 
Nvidia will still be the more compelling option (more comprehensive forward looking package like RTX + DLSS). Updating your power supply is a small price to pay for greater quality. The gimmicky chiplet package on Navi 31 will only introduce latency and frame-rate consistency issues.
As always, pricing and performance will be the deciding factors. DLSS is less of a factor today with FSR 1.0 and 2.0 around. While DLSS is better in most cases, having a better implementation and a much longer time on the market (meaning it is more polished and in more games), FSR 2.0 does the job nicely in most cases. Even FSR 1.0 will be enough for people just trying to get above a certain framerate. Ray tracing is something AMD also offers, and while Nvidia is still ahead there too, ray tracing is not essential to enjoying a game. I understand that pointing at Nvidia's exclusives does show them to have an advantage, but it's not exactly like the others don't offer those features today. Also, we will have to wait and see how much AMD has improved its ray tracing performance in RDNA3. Nvidia will also offer higher ray tracing performance, but again, at what price points? If it starts with the 4080 and 4090, we are talking about four-digit pricing; I doubt Nvidia will sell even the 4080 for less than $1,000.
As for Navi 31 and latency and frame-rate consistency and such, this is NOT CrossFire. Don't count on Navi being a disaster; just wait and see.
And no, upgrading the power supply is NOT a small price to pay, because it's not just the power supply that could be an over-$100 expense. It's also the power consumption, a cost that keeps adding up with every hour of gaming.
 
I argued with people at Pascal's launch who said the 1080 was not enough of a step forward from their 980 Tis, and that the 980 Ti overclocks to 1500 MHz easily, etc. As if the 1080 could not OC to 2000 MHz+.

Well, I had a Titan X Maxwell too, and I didn't buy the GTX 1080, simply because everyone knew the 1080 Ti would be much faster.
 
This rumour is in line with all the rumours and leaks so far, but the real question is "at what cost"?

It's going to have as many transistors as GA102, it's going to use as much power as GA102, and it's (knowing Nvidia) not going to be cheap either. The only plus side is that TSMC5 is a denser process node, which should reduce manufacturing cost, but we all know Nvidia chose Samsung 8nm for Ampere because TSMC7 wouldn't budge on cost.
 
I'm not even looking at the performance. If that mid-range card uses between 400-500 W, then I will not be touching this generation even if the flagship is like 200% faster than the 3090 Ti. I will not survive the summer with a 500 W or higher power-consuming card in my system, as my 350 W 3080 Ti is already pumping a crap ton of heat into my room.

Same here; I draw the line around 220-230 W for any card I'm willing to buy/put in my PC.
Starting from this August, our electricity bill will cost double what it used to; even my low-power 12100F + undervolted GTX 1070 system will cost me around $10/month with my casual use case (barely 1-3 hours/day of gaming, the rest light use). Rough math in the sketch below.
So yeah, this kind of power draw is a big nope for me; most likely I will just upgrade to a 3060 Ti/6700 XT and be done with it.
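A minimal sketch of that kind of monthly-cost estimate (the per-kWh price below is an assumed placeholder, not a figure from this thread; the gaming hours follow the 3 h/day mentioned above):

```python
# Rough energy cost of the GPU alone while gaming; all inputs are illustrative.

def monthly_gpu_cost(gpu_watts: float, hours_per_day: float,
                     price_per_kwh: float, days_per_month: int = 30) -> float:
    """Monthly cost of the GPU's gaming power draw alone."""
    kwh = gpu_watts / 1000 * hours_per_day * days_per_month
    return kwh * price_per_kwh

ASSUMED_RATE = 0.30  # assumed $/kWh, illustrative only

for watts in (180, 230, 400):  # GTX 1080 class, the poster's stated limit, rumored AD104 TGP
    print(f"{watts} W at 3 h/day: ~${monthly_gpu_cost(watts, 3, ASSUMED_RATE):.2f}/month")
```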
 
xx70-class (104 die) using 400 W? That's pure insanity. Does Nvidia think all its users live in Iceland, where the weather is cold and the electricity bill nearly non-existent? Here in mainland Europe, electricity expenses went up 200 to 300% since Putin's adventure in Ukraine, plus record-breaking temperatures are scorching us outside. I still remember heated debates over the 1080 Ti's (102 die) power consumption (267 W GPU under stress test) five years back; now we're talking about double the wattage for a 104 die. Anything above 250 W for the 70 class is way too much. Nvidia lost its way, imho. I wouldn't touch this thing even if Nvidia paid me to use it. It's already too hot in here with a 200 W GPU.

 
There is clearly a barrier neither Nvidia nor AMD can pass through: they cannot increase performance in a meaningful way within the two-year cycle without going crazy on power draw, so there's no point in beating a dead horse. Skip this generation if you don't agree with the way things are going, or deal with it.
People want big performance leaps and 4K 200 Hz on the two-year cycle at the same power draw, and they are being as unrealistic as Nvidia and AMD.

Set a wattage limit, stay within that limit no matter what they release, or shut up about it.
 
I argued with people at Pascal's launch who said the 1080 was not enough of a step forward from their 980 Tis, and that the 980 Ti overclocks to 1500 MHz easily, etc. As if the 1080 could not OC to 2000 MHz+.

"comprehensive package" & "greater quality" :wtf:
Also, I'm glad you already know how N31 will perform and what issues (if any) it will have.

We normal people will wait for reviews and prices before deciding. Not blindly buying from company N.

I like how he declares chiplets to be "gimmicky". How can you call something a "gimmick" when literally the entire industry is moving in that direction?

I seriously do not understand those who cheer for Nvidia or Intel... In terms of pure self-interest and what's more advantageous for the consumer, everyone should be cheering for AMD. The better AMD does against Intel and Nvidia, the more likely we get larger performance increases between generations, the more likely prices go down, and the more likely innovation is pushed further, faster.

We all remember what the CPU market was like prior to Ryzen, right? 4% generational increases, four-core stagnation, and all at a high price... Alder Lake and Raptor Lake would not exist without Ryzen.

And let's look at the GPU market: without RDNA2 mounting such fierce competition, there's no doubt Nvidia's cards would be more expensive than they already are... (BTW, AMD is able to compete with Nvidia while having less than half the R&D budget, $5.26 billion vs. $2 billion, and AMD has to divide that $2 billion between graphics and x86, with x86 being the larger, more lucrative market that must get the majority of those funds.) And look at the latest Nvidia generation to be released: all the rumors of huge power-consumption increases are evidence that Nvidia is doing everything in its power to push performance, and all due to RDNA3.

I'm not saying everyone should be an AMD fanboy, but don't the people who cheer on Intel and Nvidia realize that, at least until AMD has gotten closer to 50% market share in dGPU and x86 (especially enterprise and mobility, the two most lucrative x86 segments), victories for Intel and Nvidia inherently equate to losses for consumers? That these people who wish for AMD's failure would have us plunged into a dark age even worse than the pre-Ryzen days in both x86 and graphics... Sorry for the off-topic rant, but I just don't get it when people cheer for Nvidia before the products are even released and, by extension, cheer for reduced competition in the market... I guess the desire to create a parasocial relationship with whatever brand they deem most likely to be the winner is stronger than supporting what's best for your own material self-interest.
 
There is clearly a barrier neither Nvidia nor AMD can pass through: they cannot increase performance in a meaningful way within the two-year cycle without going crazy on power draw, so there's no point in beating a dead horse. Skip this generation if you don't agree with the way things are going, or deal with it.
People want big performance leaps and 4K 200 Hz on the two-year cycle at the same power draw, and they are being as unrealistic as Nvidia and AMD.

Set a wattage limit, stay within that limit no matter what they release, or shut up about it.
Why the F should we shut up about it? It's not like we end consumers have much say anyway in this monopoly/duopoly market. The only thing we can do, besides not buying, is to scream our opinion and point out that development is moving in the wrong direction. Maybe, just maybe, someone at Radeon and Nvidia will hear us.
 
development is moving in the wrong direction
Development is moving in the expected direction when people keep cheering for model A that beats model B while consuming 40-50-100% more power. AMD was targeting efficiency, and people kept cheering for the 250 W Alder Lake and the 450 W RTX 3090 Ti.
 
[Attachments: perf_oc.png - overclocked performance comparison charts]


The GTX 1080 was 25% faster when comparing OC versions, and 20% for the OC-vs-OC case. And it did that with 10% fewer transistors (7,200 vs. 8,000 million), with the die shrinking from 600 mm² to ~300 mm², and an impressive 59% improvement between the FE and OC-vs-OC comparisons.

Now the 4070 Ti has more transistors, L2 and ROPs, but is cut to a 192-bit bus using the same-speed G6X memory; no improvement there, and bandwidth is cut in half.
 
I could be wrong, but isn't the CUDA core count calculated by adding the FP32 cores (7,680) to the FP64 cores (3,840) for a total of 11,520 cores? That's roughly 7% more cores than the 3090 Ti's 10,752 at about 11% lower power (quick check below). I mean, it's not great, but it's not bad either.

I remember all the rumors not knowing how to count CUDA cores before the 3000-series launch. Looks like that might be the case again.
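For what it's worth, a minimal back-of-the-envelope check of those ratios, treating the 11,520-core figure purely as the speculation above and taking the RTX 3090 Ti's published 10,752 CUDA cores and 450 W TGP as the baseline:

```python
# Speculative AD104 vs. RTX 3090 Ti ratios; the 11,520-core figure is the
# poster's assumption (7,680 FP32 + 3,840 FP64), not a confirmed spec.
ad104_cores_assumed = 7680 + 3840   # = 11,520
rtx_3090_ti_cores = 10752           # published RTX 3090 Ti CUDA core count

ad104_tgp_rumored = 400             # rumored TGP in watts
rtx_3090_ti_tgp = 450               # RTX 3090 Ti TGP in watts

print(f"Cores vs. 3090 Ti: {ad104_cores_assumed / rtx_3090_ti_cores - 1:+.1%}")  # ~ +7.1%
print(f"Power vs. 3090 Ti: {ad104_tgp_rumored / rtx_3090_ti_tgp - 1:+.1%}")      # ~ -11.1%
```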
 
Disappointed if true, AD104 should beat GA102 by 30% like what happened with Pascal
Well, is this a rule just because it happened once?

The only thing I know is that the next generation will be more expensive, and it will take a long time to get entry-level and mid-range cards while stores are suffering from a GPU overstock...
 