
ICYMI, NVIDIA GeForce RTX 4080 12GB Uses 192-bit Memory Bus

It is "GTX 970-3.5" and "is 10GB on an RTX 3080 enough?" all over again.
There is no excuse for a 192-bit VRAM bus on a TOP TIER CARD. It's stupid, plain and simple.
It probably has Content Creation as its target. High performance, reasonable amount of memory (bandwidth does not matter in this segment), lower consumption, lower price. Attractive for integrators.
Maybe. Fair point.
 
You're not wrong; to get 2x my 3080's performance, it looks like I'll need to spend double... No thanks.

But waiting for full 4080 reviews should coincide with RDNA3 and I can choose based on something less... Insane.
You have a third option: don't buy anything and stay with what you have. The 3080 is very much capable. You don't need a 4080, especially at that price. I wonder, though, how much faster that 4080 12GB will be in comparison to the 3080 10/12GB.
 
Price is a whooooole different discussion, one which I think is kinda insane.

More just purely: if it has fewer shaders and less memory bandwidth but can equal performance vs a 384-bit GDDR6X subsystem, then that's certainly something.
The shader deficit was easily made up by the significant clock speed increase. As for bandwidth, I wonder what kind of cache config it has.
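For anyone who wants the napkin math: peak memory bandwidth is just bus width times data rate. A quick sketch (the Gbps and bus-width figures below are the commonly reported specs for these cards, so treat them as assumptions, not confirmed numbers):

```python
# Peak memory bandwidth = (bus width in bytes) * (data rate in GT/s).
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gtps

# Commonly reported specs (assumptions, not official confirmations).
cards = {
    "RTX 4080 12GB (192-bit, 21 Gbps GDDR6X)": (192, 21.0),
    "RTX 3080 10GB (320-bit, 19 Gbps GDDR6X)": (320, 19.0),
    "RTX 3090 Ti   (384-bit, 21 Gbps GDDR6X)": (384, 21.0),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {peak_bandwidth_gbs(bus, rate):.0f} GB/s")
```

On those numbers the 192-bit card lands around half the raw bandwidth of a 384-bit GDDR6X card, which is why the cache config question matters.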
 
Where's the news about the new DLSS 3 requiring Ada? I want rage flaming high in the forum :D
 
There is no excuse for a 192-bit VRAM bus on a TOP TIER CARD. It's stupid, plain and simple.
I think saying it in as many words doesn't account for the performance and all the other spec nuances, though. Let's say it equals or bests a 3090 Ti and costs significantly less: is it still stupid?
 
Is it possible the lower memory bandwidth will mostly show in titles with older engines, and older titles - which reviews usually don't cover?

I know the older titles should pretty much run well even on older cards, but that's not always the case.

You can bog down any card with just 11 year old Skyrim and high enough textures and mods.

And of course VR, in titles like DCS. There's no DLSS to help you; you need raw power and lots of memory to push the high resolution of an HP Reverb G2 or Valve Index in DCS and other sims that don't change their graphics engines often.
Exactly. If DLSS 3.0 reduces the memory load (and it does), these may perform extremely well in RTX testing but be nothing exciting in traditional DX12 titles - and potentially worse in VRAM-heavy workloads, be they actual workloads like rendering or AI work, or just people running 16K textures in Skyrim at 4K.
 
I think saying it in as many words doesn't account for the performance and all the other spec nuances, though. Let's say it equals or bests a 3090 Ti and costs significantly less: is it still stupid?
But does it "best the 3090 Ti" if it bests it in just some cases? Cases reviewers will focus on, I'm sure (Nvidia demands it), but it could still lag behind in older titles, older engines, and high-resolution VR demanding high rasterisation.
 
4080 16GB - galaxy s22+
4080 12GB - galaxy s22

This new sub-category will take time to digest, but once compared to the cellphone industry's tier structure, (almost) everyone will accept it.
Still, the confusion regarding performance will stay.
 
Exactly. If DLSS 3.0 reduces the memory load (and it does), these may perform extremely well in RTX testing but be nothing exciting in traditional DX12 titles - and potentially worse in VRAM-heavy workloads, be they actual workloads like rendering or AI work, or just people running 16K textures in Skyrim at 4K.
If that is the case, it may also show better boosts when using DLSS vs native due to lower mem bandwidth.
 
4080 16GB - galaxy s22+
4080 12GB - galaxy s22

This new sub-category will take time to digest, but once compared to the cellphone industry's tier structure, (almost) everyone will accept it.
Still, the confusion regarding performance will stay.

You cannot even compare. Everyone has their own preferences there, mostly aesthetic ones like size and looks; the internal specs are not a priority for most. It ain't a card that fits in one slot. The added margin for GPUs, though... sheesh. Phones at least consist of many more components, justifying a cost around ~$1K. A GPU still consists of four core parts: GPU, memory, VRM and cooling. That's it. Phones have an ASIC with memory/CPU/RAM, a display, cooling, a bunch of cameras, a battery, a cell radio, speakers, etc.
 
My feeling is that since Turing, Nvidia doesn't care about gaming all that much; they focus more on professional applications of their hardware.
 
Price is a whooooole different discussion, one which I think is kinda insane.

More just purely: if it has fewer shaders and less memory bandwidth but can equal performance vs a 384-bit GDDR6X subsystem, then that's certainly something.
But can it? Because usually that only counts for the special selection Nvidia has in store for us, software-wise. And in that sense, the DX11 era is over, where anything on that API ran so much better on green...
We've already seen 10GB 3080s drown, and most of the time cards are limited by memory performance, not the core - which is the usual MO with Nvidia.

What we have now is 'select DLSS' titles, 'select RT titles' and every gen wants to nudge us further into that 'reality' when in fact there is a massive truckload of gaming outside of it.

So far, I'm completely unimpressed by these specs, if not appalled.
 
You cannot even compare. Everyone has their own preferences there, mostly aesthetic ones like size and looks; the internal specs are not a priority for most. It ain't a card that fits in one slot. The added margin for GPUs, though... sheesh. Phones at least consist of many more components, justifying a cost around ~$1K. A GPU still consists of four core parts: GPU, memory, VRM and cooling. That's it. Phones have an ASIC with memory/CPU/RAM, a display, cooling, a bunch of cameras, a battery, a cell radio, speakers, etc.
You can compare it from a marketing point of view.
 
My feeling is that since Turing, Nvidia doesn't care about gaming all that much; they focus more on professional applications of their hardware.
I think they know there isn't much to gain there, because realistically, for rasterization they've had performance on point since Pascal. Even the move to 4K isn't lasting long enough in terms of performance requirements to keep GPU sales afloat.
 
What we have now is 'select DLSS' titles, 'select RT titles' and every gen wants to nudge us further into that 'reality' when in fact there is a massive truckload of gaming outside of it.
Absolutely, y'all have a point. I want to see the full reviews of how these cards stack up in everything, not cherry-picked scenarios.

My experience of a 3080 with all the 2020+ bells and whistles has been really good, and older stuff it absolutely obliterates. Here's hoping the 4080 series can do the same: offer those uplifts in the most demanding scenarios and still smash 'normal' (or older) workloads at high res/fps.
 
So they've printed the 80 series label on a 60/70s tier GPU with the high price to go with it. This is getting ridiculous :shadedshu:
 
Sweet. I predicted the RTX 4070 will be 8GB GDDR6 non-X, with a 128-bit bus, running on PCIe Gen 4.0 x8, for the price of the current RTX 3080.
 
My experience of a 3080 with all the 2020+ bells and whistles has been really good, and older stuff it absolutely obliterates. Here's hoping the 4080 series can do the same: offer those uplifts in the most demanding scenarios and still smash 'normal' (or older) workloads at high res/fps.

I don't have the feeling my RTX 3080 "obliterates" anything in VR. And DCS, a flight sim with an older engine, or Microsoft Flight Simulator, struggles even in normal 4K, without everything maxed out.
 
Sweet. I predicted the RTX 4070 will be 8GB GDDR6 non-X, with a 128-bit bus, running on PCIe Gen 4.0 x8, for the price of the current RTX 3080.

And with the performance of the old 3070? :kookoo:
 
I don't have the feeling my RTX 3080 "obliterates" anything in VR. And DCS, a flight sim with an older engine, or Microsoft Flight Simulator, struggles even in normal 4K, without everything maxed out.
VR isn't really what I was referring to; more like any games from, say, 2018 and older (for a 2020 high-end card), which it has smashed for me at 4K. MSFS 2020? Yeah, good luck there. I hope you have a 5800X3D or 12900K + DDR5 if you expect to relieve the CPU bottleneck and have the GPU actually take a front seat.
Limiting VRAM bandwidth is always stupid and always has been, regardless of the reasoning.
I do believe the 12GB card is a 4070 with an 8 stickered over the 7, so in that sense it offers xx70-tier bandwidth or higher. Me personally? Yeah, it's useful, but ultimately the performance defines what I think, not the spec itself. I do respect how you feel, however; that's not an invalid take, just one with a bit more nuance for me. There's a heck of a lot more that goes into a card's resultant performance than just the memory subsystem/bandwidth.
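Case in point: a big on-die cache changes what a narrow bus feels like in practice. A toy model (the hit rate below is a made-up illustration, not a measured figure for any real card):

```python
# Toy model: memory traffic that hits the on-die cache never touches DRAM,
# so effective bandwidth ~ dram_bandwidth / (1 - cache_hit_rate).
def effective_bandwidth_gbs(dram_bw_gbs: float, cache_hit_rate: float) -> float:
    """Rough effective bandwidth in GB/s under a given cache hit rate."""
    assert 0.0 <= cache_hit_rate < 1.0
    return dram_bw_gbs / (1.0 - cache_hit_rate)

# Hypothetical: with a 50% hit rate, a ~504 GB/s bus behaves like ~1008 GB/s.
print(effective_bandwidth_gbs(504, 0.5))  # 1008.0
```

The real hit rate depends on the workload (and VR/sim titles with huge working sets may hit far less often), which is exactly why the spec sheet alone doesn't settle the argument.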
 
I do believe the 12GB card is a 4070 with an 8 stickered over the 7, so in that sense it offers xx70-tier bandwidth or higher.
That's still iffy to me, personally. However I agree, this would be better if it were a 4070 with 12GB.

I do respect how you feel, however; that's not an invalid take, just one with a bit more nuance for me.
Fair enough. And I do see your points. The reason I feel this way is that I've been watching card makers gimp an otherwise solid GPU with a skimpy VRAM bus for nearly three decades. It's one of those things that irritates the crap out of me.
there's a heck of a lot more that goes into a card's resultant performance than just the memory subsystem/bandwidth.
True.
 