
Looking for guidance: more VRAM or more performance?

Um, folks, if you read the OP's original comment, they are considering the RTX 4000 series cards OR the RX 7000 series cards.

None of the cards mentioned in the last few comments are being considered. Just throwing it out there.
Pff, reading the original request... party pooper.
 
This one says it doesn't:
[attached chart: power-gaming.png]

It's ~10% faster for ~10% more power.
Tried it with my wife's 3080 against my 6950XT. Maybe it's a driver thing at this point or something but that 3080 pulled more.
 
The one thing that graph doesn't point out is the limits. The 3080 FE, for example, has a hard limit of 370 W, with a default of 320 W if you don't change any parameters. The reference RX 6950 XT defaults to 284 W (so the vBIOS says).
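Just to put numbers on the back-and-forth: "~10% faster for ~10% more power" is a wash in perf-per-watt terms, which is probably why the chart and the anecdotes can both be true. A minimal sketch, using only the ratios and default power limits quoted above (not measurements):

```python
# Perf-per-watt is just relative performance divided by power draw.
def perf_per_watt(rel_perf: float, watts: float) -> float:
    return rel_perf / watts

# "~10% faster for ~10% more power" cancels out exactly:
print(1.10 / 1.10)  # 1.0 -> identical efficiency

# Using the default limits quoted above (320 W FE 3080, 284 W reference 6950 XT):
ratio = perf_per_watt(1.10, 320) / perf_per_watt(1.00, 284)
print(f"{ratio:.2f}x")  # ~0.98x -> still a near-wash at stock limits
```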
 
There is effective marketing (Nvidia), and there is shit marketing and poor time-to-market (AMD).

It has an effect. It says little about raw performance or overall quality; it says a lot about the person saying it and how they've been influenced. I also needed a few months on AMD before I could shake old Nvidia notions of how things 'should be' or 'should work'. A good example is how RDNA3 overclocks. It's a different thing. Is it worse? Nope. It's probably better. But different.
Don't forget the effect of overpaying and then defending it any which way. It's strange as hell though: a few threads ago people were saying RT is a waste until you get to a 4090 and that the 4080 is the biggest non-deal yet, and here they are saying get the 4080 if you want RT.
 
Tried it with my wife's 3080 against my 6950XT. Maybe it's a driver thing at this point or something but that 3080 pulled more.
Do you both have reference cards? Otherwise it may just come down to the TDPs set by the manufacturers.
 
Do you both have reference cards? Otherwise it may just come down to the TDPs set by the manufacturers.
XFX 6950 XT and FE 3080. Both 2x8-pin PCIe power.
 
I would wait and see what the refresh from NVIDIA will bring to the table - maybe GPUs with more VRAM. Or, if you really want the 4080, maybe it will drop in price after the refresh releases.
 
If you're not able to pick up what I'm laying down, I can't help you, because I made it as simple as possible.

Well, if it's not that it's good at ray tracing, and not that it's a fair price, is it simply that it performs well? Because I acknowledged that in my first post. My problem was its price. It certainly doesn't win any cost-per-frame charts. And if you don't care about cost per frame... 4090. If you do, AMD or last-gen Nvidia. Though spend your money how you will.

Perhaps I just have a problem with the principle of a $1200 MSRP 80-class card. And not even one that's made from the same die as the 90-class.
 
The days of cheap high end cards are gone for now until the next race between the two.
 
I think the second half of that sentence is most likely superfluous.
 
Don't forget the effect of overpaying and then defending it any which way. It's strange as hell though: a few threads ago people were saying RT is a waste until you get to a 4090 and that the 4080 is the biggest non-deal yet, and here they are saying get the 4080 if you want RT.
Sure, but then you're in that specific category of sheep 'that really wants RT'.

All bets are off if you want the optimal performance for your 'budget' in an open-ended technology. Logic was left at the door. No need to search for it.

Perhaps I just have a problem with the principle of a $1200 MSRP 80-class card. And not even one that's made from the same die as the 90-class.
It's a 1200-dollar x104. Nvidia's biggest 'feat' yet, and arguably the worst x80 in its history since Kepler.
 
Pff, reading the original request... party pooper.
I didn't mean to be rude, of course. Just trying to help the OP as much as possible without making a mess of their train of thought.

The days of cheap high end cards are gone for now until the next race between the two.
That seems to be gearing up. There are rumors floating around that AMD has another rabbit to pull out of its hat and that NVidia's time being King-Of-The-Hill might be over, again. It never ceases to amaze me how often in history these two companies have traded places. If the rumors are true, it'll be exciting to watch! But I digress...
 
The days of cheap high end cards are gone for now until the next race between the two.
It's not going to return to the old prices even when the two companies have competitive products… I've just accepted it, and I'm gonna buy every two gens.
 
I first read that as "I'm just gonna buy two every gen"... :ohwell:
 
I have my doubts Nvidia will release something that is powerful, affordable, and has enough VRAM. You can have two of the three, but not all three - that could be used as an affordable workstation card, and we can't have that.

Has anyone else noticed this? The only cards from Nvidia with lots of VRAM are at the bottom of the stack or at the top: 3060: 12 GB, 3060 Ti: 8 GB, 3070: 8 GB, 3070 Ti: 8 GB, 3080: 10 GB, 3080 Ti: 12 GB. Same with the 40 series and the 16 GB 4060 Ti. Out of all the cards that needed 16 GB, why that one? Cynical me says it's because of the borked memory interface that makes it unappealing for workstation use.

I mean, I hope I'm wrong, I really do. A 16 GB 256-bit 4070 is just what we need.
A 24GB 192-bit 4070 Ti with DP 2.0 at the 4060 Ti's price would be a much more wanted option, I'm sure. The 4070, with its limited performance, will be obsolete very soon, so adding more VRAM to it is just wasting money IMHO.
But a 24GB 192-bit 5070 at the 4070's TDP and price, with 4070 Ti performance and DP 2.1 ports, would be even more interesting, with better longevity than any other RTX 4000 GPU (except the 4090).
 
A 24GB 192-bit 4070 Ti with DP 2.0 at the 4060 Ti's price would be a much more wanted option, I'm sure. The 4070, with its limited performance, will be obsolete very soon, so adding more VRAM to it is just wasting money IMHO.
But a 24GB 192-bit 5070 at the 4070's TDP and price, with 4070 Ti performance and DP 2.1 ports, would be even more interesting, with better longevity than any other RTX 4000 GPU (except the 4090).
Sorry, what? You think a 256-bit 16 GB 4070 would be a waste? Why? The extra bandwidth would help performance and the extra capacity would give it a longer lifespan. If Nvidia really wanted, they could even replace the GDDR6X with GDDR6 to save money, since the extra channels would make up for the lost bandwidth. It would be better if they didn't, but the 7800 XT uses GDDR6 on a 256-bit bus and still has more bandwidth than a 4070. I agree with you that Nvidia should upgrade its ports, though.
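For anyone who wants to sanity-check the 7800 XT claim, bandwidth is just bus width times per-pin data rate. A rough sketch in Python - the Gbps figures are the commonly cited specs for these cards, so treat them as assumptions:

```python
# Memory bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(256, 19.5))  # RX 7800 XT, GDDR6        -> 624.0 GB/s
print(bandwidth_gbs(192, 21.0))  # RTX 4070, GDDR6X         -> 504.0 GB/s
print(bandwidth_gbs(256, 18.0))  # hypothetical GDDR6 4070  -> 576.0 GB/s
```

Even slower 18 Gbps GDDR6 on a 256-bit bus would beat the real 4070's 504 GB/s, which is the point about the extra channels making up for the lost per-pin speed.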
 
Sorry, what? You think a 256-bit 16 GB 4070 would be a waste? Why? The extra bandwidth would help performance and the extra capacity would give it a longer lifespan. If Nvidia really wanted, they could even replace the GDDR6X with GDDR6 to save money, since the extra channels would make up for the lost bandwidth. It would be better if they didn't, but the 7800 XT uses GDDR6 on a 256-bit bus and still has more bandwidth than a 4070. I agree with you that Nvidia should upgrade its ports, though.
You don't understand how GPUs are designed, I'm afraid. GPUs aren't LEGOs - you can't just add an extra 64 lines to get more bandwidth. You'd need to spend a lot of money on new masks and on validating an essentially new die and SKU, so a 256-bit 4070 is simply impossible now. On the other hand, adding double-density memory chips to the 192-bit bus really is doable, the same way it was done with the 128-bit bus on the 4060 Ti.
Adding more bandwidth to the resource-limited 4070 can't help at a detectable level in games. We'll soon gain a lot more bandwidth at the same bus width with next-gen GDDR7 chips, but that extra bandwidth must be coupled (and will be) with more resources at higher clocks. So making an expensive, essentially new 256-bit die limited to the 4070's resources just to gain maybe 0.5 percent in games makes no sense at all when next-gen solutions that are 30-40 percent faster and more power efficient are waiting just around the corner.
 
You don't understand how GPUs are designed, I'm afraid. GPUs aren't LEGOs - you can't just add an extra 64 lines to get more bandwidth. You'd need to spend a lot of money on new masks and on validating an essentially new die and SKU, so a 256-bit 4070 is simply impossible now. On the other hand, adding double-density memory chips to the 192-bit bus really is doable, the same way it was done with the 128-bit bus on the 4060 Ti.
Adding more bandwidth to the resource-limited 4070 can't help at a detectable level in games. We'll soon gain a lot more bandwidth at the same bus width with next-gen GDDR7 chips, but that extra bandwidth must be coupled (and will be) with more resources at higher clocks. So making an expensive, essentially new 256-bit die limited to the 4070's resources just to gain maybe 0.5 percent in games makes no sense at all when next-gen solutions that are 30-40 percent faster and more power efficient are waiting just around the corner.
I'm not an idiot. But it's been done before; in fact, it was done the only other generation a Super series was introduced: the 2060 had a 192-bit bus, the 2060 Super a 256-bit bus.



It's not impossible to change a bus configuration. It's just difficult.
 
I'm not an idiot. But it's been done before; in fact, it was done the only other generation a Super series was introduced: the 2060 had a 192-bit bus, the 2060 Super a 256-bit bus.
...
It's not impossible to change a bus configuration. It's just difficult.
I'm afraid you are. The 2060 used defective dies that were originally designed with a 256-bit bus but had it cut down to a working 192-bit configuration.
So it wasn't some magical bus reconfiguration - just rebranding defective TU106 (and even TU104) dies as an entry-level, 192-bit 2060 SKU.
So yes, it is possible to rebrand some defective 4080 dies as a 4070 or 4070 Ti with an upgraded, wider 256-bit bus, as long as the whole 256-bit bus is operational (though such defective dies can perform even worse than a healthy 4070 Ti) - it all depends on how much is disabled. It's not a way to make them cheaper than regular 4080 dies, however.

So if you really want, you can turn off half the area of a healthy 4080 die and pretend it's a reconfigured 4070 with a 256-bit bus - but what idiot would do it that way, and for what "gains"?
BTW, it's not even difficult, in fact - it's easily doable, but it's reasonably limited to defective dies.

Do you think you're some smartass who's smarter than nVidia's designers - that they have no clue what bus width their chips need?
 
I'm afraid you are. The 2060 used defective dies that were originally designed with a 256-bit bus but had it cut down to a working 192-bit configuration.
So it wasn't some magical bus reconfiguration - just rebranding defective TU106 (and even TU104) dies as an entry-level, 192-bit 2060 SKU.

Yes, that is often how lower-tiered CPUs and GPUs were made. Just not this generation of Nvidia GPUs. How they were to do it, if they were to do it, is not really my problem.

So it wasn't some magical bus reconfiguration - just rebranding defective TU106 (and even TU104) dies as an entry-level, 192-bit 2060 SKU.
So yes, it is possible to rebrand some defective 4080 dies as a 4070 or 4070 Ti with an upgraded, wider 256-bit bus, as long as the whole 256-bit bus is operational (though such defective dies can perform even worse than a healthy 4070 Ti) - it all depends on how much is disabled. It's not a way to make them cheaper than regular 4080 dies, however.

So if you really want, you can turn off half the area of a healthy 4080 die and pretend it's a reconfigured 4070 with a 256-bit bus - but what idiot would do it that way, and for what "gains"?
BTW, it's not even difficult, in fact - it's easily doable, but it's reasonably limited to defective dies.

Do you think you're some smartass who's smarter than nVidia's designers - that they have no clue what bus width their chips need?

Did you miss the part where I said a 16 GB 256-bit 4070 was unlikely? I was only responding to somebody else who brought it up, who I assume was responding to all the leaks, and I said it would be nice to have. Perhaps they could have designed it that way from the start, I don't know. All I know is that the bandwidth limitation of the lower-end 40 series is one of its biggest drawbacks: you can see the delta versus other cards grow and grow the higher the resolution goes. I mean, they wouldn't be going for a Super series if they were happy with sales now. A 24GB clamshell 4070 may be easier, but it makes no sense - that much memory isn't needed, and it doesn't help the bandwidth issue.

I'm not designing cards for Nvidia, and frankly it's not my job. All I said was A) a 256-bit 16 GB 4070 was unlikely, B) a hypothetical 256-bit 16 GB 4070 would be nice to have, and C) a 256-bit 16 GB 4070 was not impossible.

I'm afraid you are
Is that really necessary? It seems to me like you just assumed I thought changing a bus config was easy because I said "A 16 GB 256-bit 4070 is just what we need." If Nvidia painted themselves into a corner, it's not my responsibility to find them a way out.
 
I would like a 16 GB 256-bit 4070 at 175-225 W (the sweet spot) too, at the price of a 3060 Ti if possible :) - best 1080p card :D
I think NVIDIA has designed these cards with lower VRAM because they want to sell their other cards with more VRAM at way higher prices. Now that you can use Studio drivers on RTX cards, why would people buy the more expensive cards if they're mostly the same?
Aren't there any 1.5 GB VRAM modules? Even an RTX 4070 with 18 GB of VRAM on a 192-bit bus would be interesting - problem solved :D

Bus width | 1 GB chips | 1.5 GB chips* | 2 GB chips
128-bit | 8 GB | 12 GB | 16 GB
192-bit | 12 GB | 18 GB | 24 GB
256-bit | 16 GB | 24 GB | 32 GB
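For what it's worth, that whole table falls out of one formula: each GDDR6 chip normally occupies a 32-bit channel, and the table as printed corresponds to two chips per channel (clamshell mode - the same trick the 16 GB 4060 Ti uses on its 128-bit bus). A quick sketch that reproduces it:

```python
# VRAM capacity = number of 32-bit channels * chips per channel * chip density.
def vram_gb(bus_bits: int, chip_gb: float, chips_per_channel: int = 1) -> float:
    return (bus_bits // 32) * chips_per_channel * chip_gb

# Reproduce the table above (two chips per channel, i.e. clamshell):
for bus in (128, 192, 256):
    row = [vram_gb(bus, d, chips_per_channel=2) for d in (1, 1.5, 2)]
    print(f"{bus}-bit: {row}")
# 128-bit: [8, 12.0, 16]
# 192-bit: [12, 18.0, 24]
# 256-bit: [16, 24.0, 32]
```

With the usual one chip per channel, halve every entry - which is exactly why a 192-bit 4070 ships with 12 GB.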
 
I'm not designing cards for Nvidia, and frankly it's not my job. All I said was A) a 256-bit 16 GB 4070 was unlikely, B) a hypothetical 256-bit 16 GB 4070 would be nice to have, and C) a 256-bit 16 GB 4070 was not impossible.

Is that really necessary? It seems to me like you just assumed I thought changing a bus config was easy because I said "A 16 GB 256-bit 4070 is just what we need." If Nvidia painted themselves into a corner, it's not my responsibility to find them a way out.
I see you have absolutely no clue what you're talking about.
Configuring a 192-bit bus into a 256-bit one is "possible" the same way configuring your 8 GB RAM modules to become 32 GB ones is ;) You have absolutely no clue what a physical limitation means.
Only Jesus was able to change water into wine. Maybe he could also have configured old, limited hardware into newer hardware the same way ;)

Yes, it really is necessary. You're forcing me into it.
You're the first one here who can make an nVidia bus wider than its physical size - TSMC and nVidia should hire you ASAP so you can configure old 8-bit microprocessors into multicore 64-bit ones ;)

With your magical ability to reconfigure everything, ASML would go bankrupt, their EUV scanners no longer needed ;)

Why build expensive new 100B-transistor hardware if you can reconfigure old, cheap hardware to be even more capable?

Some hard-coded idiots are really funny, I see.

Perhaps they could have designed it that way from the start, I don't know.
Yes, they could, but they found more profitable ways to spend their money.

You don't know, but you still insist configuring it is possible ;)
Designing complex new hardware on the most advanced nodes costs hundreds of millions of $$$ - on the other hand, configuring is cheap and easy - it's not difficult - but it is limited by the hardware's capabilities. What's beyond a physical limitation is simply impossible to configure. You can't stretch a 192-bit bus into a 256-bit one, because it's impossible.

I'm not an idiot. But it's been done before,
No, it wasn't. It was quite a different case. Downgrading a 256-bit bus to 192 bits - yes, that's possible. But for you, everything is possible, just difficult ;)
 
Easy peasy, guys, Team Green is all about AI.

Give 'em a chance to whip up a batch artificially inseminated with more VRAM. Sorta like downloading it, only way cooler, yes?

Forget the bus, they're about to be running trains on that VRAM.
 
@pk67

Okay, now you're just being confrontational for the sake of it. I don't know what to say that I haven't already said. It always would have meant a redesign, but it was just a hypothetical. It wasn't my idea and it's not my job to figure out how to make it viable.
 