
NVIDIA Tunes GeForce RTX 5080 GDDR7 Memory to 32 Gbps, RTX 5070 Launches at CES

It's all DOA except the 5090, which will sell like hot cakes. All the rest are refreshes with a 10-20% improvement. The 5080, with only 5% more CUDA cores than the 4080 Super, is slower than the 4090 for sure.

The 4080 has 10% fewer CUDA cores than the 3090 Ti, yet beats the 3090 Ti by 20% at 4K.
[Chart: relative-performance_3840-2160.png]
 
Dear Leader Jensen needs more leader jackets… :D


 
Anybody get the feeling the 5080 will be slower than the 4090 in terms of raw performance, but will have better AI processing cores to generate frames? And pretty much everyone has already said the prices are going to suck.
 
It's not even AI. It's BS that NV and lots of other companies are milking and making billions from; it is not AI in the slightest, or anywhere close to it. It's a fuckin Ponzi scheme, another dot-com-bubble bollocks, and it will bust. Taking a lot of data and having the means to look at it and interpret it so that it looks like intelligence is not artificial intelligence; it's computing. It's faster computing than how we've been doing it up until now, but there's no intelligence involved. You can program it, just like any other type of computer model, to come up with any result you want it to; that's not artificial intelligence. It's Nvidia and others realising they can do tasks on GPUs/LLMs/"AI" thousands of times faster than on a traditional CPU and telling you it's AI. We are so far from true AI that I highly doubt it will even come to pass in our generations. The industry gets hyped up over some new technological BS every few years that is supposed to change everything and doesn't; it just makes a select few very, very rich and leaves the rest in the dirt. Rinse and repeat.
It's really distributed (computing) acceleration in the cloud, not very much unlike what we've been doing with Folding@Home for years. I think the progress here is the amount and tweakability of the models they use; it's just that the consumer-facing models are pretty weak, and a larger model is very costly to run. For business, though, there's a lot to be gained - potentially. But yeah, its problems are clear. The rapidly expanding data and power hunger of this tech isn't quite suitable for our day and age. It's quite similar to how we're gaming now: on grossly inefficient engines doing more real-time processing to gain a tiny visual advantage over much more efficient, older approaches to rendering an image. There's a bit more detail, but the price of it is ridiculous. Ironically, to make it work better, we take a much lower-quality source image to render and then juice it up with this new technology; the net result once again offers a tiny advantage over ye olde methods. It's a two-steps-forward, one-step-back affair, really. There's some movement forward, though, if you're willing to see it.

I think for gaming the major problem isn't so much AI, or its presence, or Nvidia pushing it, but rather the overall market conditions and the lacking progress of competitors. Those are factors that can and will likely change. Markets don't just stop working; it's like the economy, going up and down in effectiveness.

Anybody get the feeling the 5080 will be slower than the 4090 in terms of raw performance, but will have better AI processing cores to generate frames? And pretty much everyone has already said the prices are going to suck.
It will be slower in some games and faster in another handful of selected titles, so Nvidia can maintain they have a 4090 killer at a slightly lower price and win over those who didn't jump on the x90 last time. This, to me, is simply obvious; we know how Nvidia rolls at this point. Which also tells us a lot about the actual changes in the shaders: I think it's going to be clocked higher, and architecturally not much is changed. The improved clocking will make the difference - except where it can't clock higher because the game/app just wants all the power it can get out of it. It's the perfect gray area for Nvidia to sell this on.
 
Why is this article so positive? 12GB for an $800 card in 2025 is nothing to celebrate, and neither is 16GB on a $1000+ card. How anyone can put a positive spin on this is beyond me.

16GB is the minimum for a mid-to-high-end card in 2025, and 16GB on a top-of-the-range consumer card is ludicrous. There are games already using more than that now, so how will this fare after another 2 years in a customer's PC?
 
Why is this article so positive? 12GB for an $800 card in 2025 is nothing to celebrate, and neither is 16GB on a $1000+ card. How anyone can put a positive spin on this is beyond me.

16GB is the minimum for a mid-to-high-end card in 2025, and 16GB on a top-of-the-range consumer card is ludicrous. There are games already using more than that now, so how will this fare after another 2 years in a customer's PC?
High prices won't change, that's a fact... it's all about spending your money smartly. Depending on what you play, your settings, resolution and current GPU, you decide whether you should upgrade or not.
I think folks with a 10-series should upgrade... 20-series owners can consider it depending on the circumstances... and I think 30-series owners, especially x70/x80/x90, can hold off, and 40-series... well, just stay put.
 
New-generation GPUs at these asking prices should have 5-7 year warranties.
 
Where I live we are already seeing laptops for $4,000-7,000 Canadian, or even higher. Now we have motherboards for over $700 with 1 PCIe slot, when the separation in motherboard pricing is supposed to be about flexibility. I expect the 5090 to push as high as $5,000. As crazy as that sounds, the most expensive 4090 where I live is just shy of $4,000. I hope they know what they are doing. Where I live a 1-bedroom apartment is about $2,000 a month, but they want to sell these to young affluent gamers, people with more money than they need, or people who think Nvidia is good enough to spend the cost of building 3 capable gaming PCs just to get the GPU.

The other cards are so meh that the 5080 at $2000 will not sell well. Ray tracing is nice, but not worth the cost of 2 cards.

The 5070 leaves room for a 5070 Super or Ti, but those will be expensive too.

I hope the performance justifies the price. Of course, for me the real want is raster performance. 32GB of GDDR7 sounds good, but not if it costs more than some used cars.
 
No, the joke is that the 5070 has 6,400 shaders; my 3070 Ti has 6,144.

This thing will only have what, maybe 15% more performance than mine with the new memory?
Considering the 4070 has ~15% more performance than the 3070 Ti with fewer shaders and less memory bandwidth, the 5070 will obviously be even faster than that. You can't compare shader counts cross-generationally.
 
You can't compare shader counts cross-generationally.
Yes you can; it's one of the most reliable indicators of performance. The difference in shaders between the 4070 and 3070 Ti is very small, except the 4070 has a massive increase in cache; that's why it's faster. If it weren't for that, the difference between the two would be close to none. GPU cores don't get that much faster between generations; there is not much left to optimize.
 
Anybody get the feeling the 5080 will be slower than the 4090 in terms of raw performance, but will have better AI processing cores to generate frames? And pretty much everyone has already said the prices are going to suck.
Which is more important... hmm... the trend, RT/PT... or AI?
 
Why is this article so positive? 12GB for an $800 card in 2025 is nothing to celebrate, and neither is 16GB on a $1000+ card. How anyone can put a positive spin on this is beyond me.

16GB is the minimum for a mid-to-high-end card in 2025, and 16GB on a top-of-the-range consumer card is ludicrous. There are games already using more than that now, so how will this fare after another 2 years in a customer's PC?
Sadly, it's the new crop of consumers, led by bribed/biased influencers.
 
The 4080 has 10% fewer CUDA cores than the 3090 Ti, yet beats the 3090 Ti by 20% at 4K.

The average clock speed for a 3090 Ti Founders Edition is 1999 MHz; the average for the 4080/4080 Super is 2715 MHz. That's a 36% clock speed advantage while having nearly identical specs otherwise. The 5080 will absolutely be slower than a 4090 if it releases with ~10,700 CUDA cores.

It's not just going to "magic" itself faster without huge IPC gains.
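
As a rough sanity check on that, here is a naive cores-times-clock scaling sketch in Python. The clock averages are the ones quoted above, the 3090 Ti/4080/4090 core counts are the published specs, the ~10,700-core 5080 figure is the rumour, and giving the 4090 and 5080 the same 2715 MHz average clock is purely an assumption, so treat it as a back-of-the-envelope illustration, not a prediction.

```python
# Naive "cores x clock" scaling sketch; ignores IPC, cache and bandwidth differences.
def naive_throughput(cores: int, clock_mhz: float) -> float:
    return cores * clock_mhz

rtx_3090ti = naive_throughput(10752, 1999)  # 3090 Ti FE, average clock quoted above
rtx_4080   = naive_throughput(9728, 2715)   # 4080: ~10% fewer cores, ~36% higher clock
rtx_4090   = naive_throughput(16384, 2715)  # 4090 at the same average clock (assumption)
rtx_5080   = naive_throughput(10700, 2715)  # rumoured core count, same clock assumed

print(f"4080 vs 3090 Ti: {rtx_4080 / rtx_3090ti:.2f}x")                  # ~1.23x
print(f"5080 vs 4090 (same clock assumed): {rtx_5080 / rtx_4090:.2f}x")  # ~0.65x
```

On these naive numbers the 4080's win over the 3090 Ti comes almost entirely from clocks, and a ~10,700-core 5080 would need a very large clock or IPC jump on top to close the gap to the 4090.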
 
Let's see how much faster the 12VHPWR connectors melt with the new cards.
 
Yes you can; it's one of the most reliable indicators of performance. The difference in shaders between the 4070 and 3070 Ti is very small, except the 4070 has a massive increase in cache; that's why it's faster. If it weren't for that, the difference between the two would be close to none. GPU cores don't get that much faster between generations; there is not much left to optimize.

Correct, and it has faster memory. People seem to forget these things; I guess that's why the leather jacket man keeps getting away with his BS.
 
Yikes, 12 GB for a 5070? I have 12 GB on my 6700 XT, and that is enough at 1440p because the hardware is well-tuned for this resolution; I can't imagine how crippled the 5070 will be on 12 GB of VRAM.
 
The average clock speed for a 3090 Ti Founders Edition is 1999 MHz; the average for the 4080/4080 Super is 2715 MHz. That's a 36% clock speed advantage while having nearly identical specs otherwise. The 5080 will absolutely be slower than a 4090 if it releases with ~10,700 CUDA cores.

It's not just going to "magic" itself faster without huge IPC gains.
I do see another clock speed boost, given it has an extra 80 W of TDP. That 80 W is going to go somewhere; whether that is enough for it to get anywhere near the 4090, though, remains to be seen. I am expecting a 20-30% gain over the 4080 in raw performance, but more via some DLSS/RT enhancement tied to the 5000 series.
 
I do see another clock speed boost, given it has an extra 80 W of TDP. That 80 W is going to go somewhere; whether that is enough for it to get anywhere near the 4090, though, remains to be seen. I am expecting a 20-30% gain over the 4080 in raw performance, but more via some DLSS/RT enhancement tied to the 5000 series.

Same process node; pushing clocks is going to land more on the side of diminishing returns when it comes to power, and we don't know what's being done with cache or other parts of the die, so where the power is being utilized is up in the air.

I think people are being way too optimistic given the general specs and the trend over the past 4 years. I don't see IPC and clock speed advances covering a 60% CUDA core gap.
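
To put a number on the size of that gap: treating the rumoured ~10,700-core figure from earlier in the thread as given, here is a quick sketch of the combined clock-plus-IPC uplift each 5080 core would need just to tie a 4090 overall.

```python
# Per-core uplift a ~10,700-core 5080 would need to equal a 16,384-core 4090 in total throughput.
rtx_4090_cores = 16384
rtx_5080_cores = 10700  # rumoured figure quoted earlier in the thread

required_per_core_gain = rtx_4090_cores / rtx_5080_cores - 1
print(f"Required per-core (clock x IPC) uplift: {required_per_core_gain:.0%}")  # ~53%
```

A ~53% per-core gain on the same process node would be exceptional, which is why the raw-performance comparison looks doubtful.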
 
Yet another generation of Nvidia GPUs, and as usual pretty much the entire discussion is people complaining about memory size and bus width, with not a single word about how this may enable more immersive and exciting games :rolleyes:. And as always, people make arbitrary guesses about how much memory a certain tier of GPU needs, especially without knowing anything about the performance characteristics of this upcoming generation. It's the same sad song every time, yet Nvidia has continued to dominate the upper mid-range and high-end segments, offering solid products with remarkable longevity.

As needs to be said every single time: allocated VRAM isn't the same as needed VRAM.
And don't compare VRAM sizes across GPU generations or vendors; just like with cache, comparing it without context makes no sense.

Whether Nvidia has made the right choice will be very obvious in reviews; when GPUs run out of VRAM, things go bad quickly. But if the cards continue to scale with high resolutions/details, then VRAM is not the bottleneck, despite whatever anecdotes reviewers/opinionators might pull out of thin air.
(But like with everything these days, opinions and feelings are more important than facts…)
 
Yet another generation of Nvidia GPUs, and as usual pretty much the entire discussion is people complaining about memory size and bus width, with not a single word about how this may enable more immersive and exciting games :rolleyes:. And as always, people make arbitrary guesses about how much memory a certain tier of GPU needs, especially without knowing anything about the performance characteristics of this upcoming generation. It's the same sad song every time, yet Nvidia has continued to dominate the upper mid-range and high-end segments, offering solid products with remarkable longevity.

As needs to be said every single time: allocated VRAM isn't the same as needed VRAM.
And don't compare VRAM sizes across GPU generations or vendors; just like with cache, comparing it without context makes no sense.

Whether Nvidia has made the right choice will be very obvious in reviews; when GPUs run out of VRAM, things go bad quickly. But if the cards continue to scale with high resolutions/details, then VRAM is not the bottleneck, despite whatever anecdotes reviewers/opinionators might pull out of thin air.
(But like with everything these days, opinions and feelings are more important than facts…)
I take it you haven't tried to play many modern games with RT at 4K? 16GB of VRAM is barely enough and still causes the game to swap out to main memory at times, causing stuttering. I can play Cyberpunk 2077 and max out a 16GB card very easily.

The main point that seems to escape you, though, is longevity. We will probably have to live with these cards for 2 years, so do you think games in 2 years' time will still work well with a 12GB frame buffer, or even a 16GB buffer? No, they won't, and NV will save the day by launching a 5080 Super, which is the real 5080, and it will have 24GB of VRAM; then the 5070 Ti will come with 16GB... making anyone who spent $700+ on a 5070, which is really a 60-class card, look pretty stupid for wasting their money, and stuck with games presenting as a stuttering mess within a year of owning the card.

There is no way of looking at this $1000+ 5080 as offering any kind of value in 2025. It's even looking unlikely to match the over-2-year-old 4090, let alone outperform it.

And regarding your comment on reviews: no, I don't trust people who literally get given $3000+ worth of cards to keep so they can "review" them. Yes, there will be lots of pretend whining, sarcasm and outrage about the price-to-performance ratios and how NV is greedy and out of their mind, blah blah, but that won't stop them from spending the next 2 years reviewing every motherboard, CPU and game using a free $2000+ card.
 
Yikes, 12 GB for a 5070? I have 12 GB on my 6700 XT, and that is enough at 1440p because the hardware is well-tuned for this resolution; I can't imagine how crippled the 5070 will be on 12 GB of VRAM.
And here we have the problem: people who can't look past the quantity of RAM and won't look into the speed and throughput of that RAM.
 
And here we have the problem: people who can't look past the quantity of RAM and won't look into the speed and throughput of that RAM.
Because the speed and throughput of the VRAM are meaningless when it has run out... :kookoo:

Please understand that your use case, as well as your definition of value, is maybe not the same as other people's. I personally see very little value in spending $1000+ on a 16GB card in 2025... That's my opinion, based on MY use case.

On a related side note, it seems the best-value NV card next year is going to be the 4070 Ti, but we all know that NV will cancel that card ASAP.
 
Yet another generation of Nvidia GPUs, and as usual pretty much the entire discussion is people complaining about memory size and bus width, with not a single word about how this may enable more immersive and exciting games :rolleyes:. And as always, people make arbitrary guesses about how much memory a certain tier of GPU needs, especially without knowing anything about the performance characteristics of this upcoming generation. It's the same sad song every time, yet Nvidia has continued to dominate the upper mid-range and high-end segments, offering solid products with remarkable longevity.

As needs to be said every single time: allocated VRAM isn't the same as needed VRAM.
And don't compare VRAM sizes across GPU generations or vendors; just like with cache, comparing it without context makes no sense.

Whether Nvidia has made the right choice will be very obvious in reviews; when GPUs run out of VRAM, things go bad quickly. But if the cards continue to scale with high resolutions/details, then VRAM is not the bottleneck, despite whatever anecdotes reviewers/opinionators might pull out of thin air.
(But like with everything these days, opinions and feelings are more important than facts…)
It's about what you get for what you pay. This is the second time Ngreedia has tried to sell us a lower-specified product with a "premium/high performance product" sticker - the same thing as with the two versions of the RTX 4080 before. It's more like: how dare they? 12 GB of VRAM on a $600-700 GPU in 2024 is ridiculous, a ripoff. Of course, Nvidia does this on purpose so they can release two other versions of the same card a few months later. The RTX 4080 Super is a fail among fails; that card is not even worth printing the boxes it's stored in.

And here we have the problem: people who can't look past the quantity of RAM and won't look into the speed and throughput of that RAM.
DLSS, RT and similar features occupy a noticeable amount of VRAM for their own caching purposes; VRAM is not only for textures. Yes, faster memory has higher bandwidth, so it can make up some of the time lost loading data into slower memory. Having more VRAM means that sometimes there's no need for so much loading, and that lets the disk, DRAM and CPU focus on other operations.

Some games check the VRAM size and don't let you ramp certain graphical settings up to the highest values if you don't have enough VRAM.

Some games rely heavily on VRAM size at higher resolutions, as shown in the video above; lows are much better with more VRAM. Especially take a look at The Last of Us at 4K, where the RTX 4060 Ti 8 GB is completely messed up. Please note that in order to compensate for the lack of VRAM, the driver uses system memory (RAM) instead. RAM is not only slower than VRAM, it might also be needed for other purposes. So having more VRAM is better, because the card will not parasitize the rest of the computer's resources.

As shown in the video above, sometimes it eats more than 3 GB of RAM. Gaming with 16 GB of RAM and an RTX 4060 Ti 8 GB may easily become a stuttering festival past 1080p. The same logic applies to 12 GB, 16 GB, 20 GB, ... When there is not enough video memory on the graphics card, the driver will look for it elsewhere.
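
If you want to watch this happen on your own system, a minimal sketch is to poll VRAM usage while a game runs and see how close it sits to the card's limit. This assumes an NVIDIA card with the driver's nvidia-smi tool on the PATH; note it reports dedicated VRAM only, not the system RAM the driver starts borrowing once VRAM is full.

```python
# Poll dedicated VRAM usage once per second via nvidia-smi (ships with the NVIDIA driver).
import subprocess
import time

while True:
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    # First line = first GPU; values are reported in MiB.
    used_mib, total_mib = (int(v) for v in out.strip().splitlines()[0].split(", "))
    print(f"VRAM: {used_mib} / {total_mib} MiB ({used_mib / total_mib:.0%})")
    time.sleep(1)
```

If the used figure is pinned at or near the total while the game stutters, the spillover into system RAM described above is the likely culprit.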
 