
NVIDIA GeForce RTX 4070 Founders Edition

The Navi 21 cards look bizarrely inefficient next to this thing.
Of course it does; even just going by die size, Navi 21 isn't comparable. Although really, the 4070 should be a 4060 based on die size and performance; going from the 3070 to the 4070 isn't enough of an improvement compared to previous-gen x70-tier cards.
If you intentionally overlook every shortcoming, setback and problem AMD's older GPUs have, yes.

Being worse at RT, media handling and encoding, at over double the power consumption, and subject to Radeon driver shenanigans, many of which have driven even diehards (speaking for myself here) away?

Pass.
Anyone who cares enough about RT shouldn't be buying low-end cards; the 4070 doesn't provide the level of RT performance it should for $600, but Nvidia can price the card however they want, and reviewers and fans will still hype up a really unexciting card.
IMO, media encoding and power consumption aren't good enough reasons to recommend a 4070 over other options, and the driver issues are way overblown, including people whining about not getting updates for a few months.
It saves you a lot of power. With my usage and location it costs about the same as a 6800XT but would save me about 70€ a year in energy consumption.
It will still take years for the card to pay back the difference, and for people into the hobby of PC gaming who upgrade every 2 years anyway (the way the 3070 became outdated due to VRAM), I doubt most will notice the difference in power consumption unless the card is constantly at full load.
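As a rough sanity check on that ~70€/year claim, here is a minimal back-of-the-envelope sketch; the power delta, gaming hours and electricity price below are illustrative assumptions, not figures from the review or from anyone's actual bill:

```python
# Back-of-the-envelope energy-cost difference between two GPUs.
# All three inputs are assumptions chosen for illustration only.
power_delta_w = 100    # assumed extra draw of the 6800 XT vs the 4070 while gaming
hours_per_day = 4      # assumed hours of gaming per day
price_per_kwh = 0.45   # assumed electricity price in EUR/kWh

kwh_per_year = power_delta_w / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> {cost_per_year:.0f} EUR/year")
# With these assumptions: 146 kWh/year -> ~66 EUR/year, i.e. in the ballpark of
# the quoted ~70 EUR, but heavily dependent on hours played and local prices.
```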
 
Popularity is NOT synonymous with superiority... if it were, we'd all agree that McDonald's has the highest-quality food because they sell the most of it.

While I agree, I think I have outlined at length that there's nothing superior, or redeemable, about picking a last-gen AMD card (or even NVIDIA - save for the miracle of finding an RTX 3090 for $600) over this.
 
100.628437%... 0.628% faster in classic raster, while being 22% slower in ray tracing - the standard being adopted by all modern game engines moving forward - and consuming 100 W more when gaming. Seems like a wise choice to go for the "winning" RX 6800 XT!

That's also not taking into account DLSS and frame generation.
Too bad Ray Tracing is de facto unusable in any game going forward due to a measly 12GB, whereas the 6800 XT carries 16GB.

Nice try, but you've got a big green haze in front of your eyes. It's hilarious. First you cherry-pick efficiency and relative RT at 1440p and then leave out the fact that in raster, it's not even faster than a two-year-old card that can be had for less. Then, to top it off, 'not taking into account DLSS and FG'... which has been shown to go obsolete gen to gen, as Ampere can't even do it :D Mate, do you even think?

You've lost all credibility to me, both as a member and as staff. You can't even put this under benefit-of-the-doubt sort of statements; this is just plain awkward nonsense you're spewing.

The 4070 is super DOA and if you buy/bought one, you're an idiot, simple. How to confirm this? Just place its spec list next to a 6-year-old 1080 Ti. It's arguably worse at the same MSRP. And you go 'but yay, +22% RT perf over last gen's competitor product in 2023'. Hilariously silly. Here we are in 2023 with a card that has the same bandwidth and 1 GB of VRAM gained vs a 1080 Ti, and there are actually people calling it good because they can cripple their perf with RTX ON.

Oh, but of course, this is just an overall relative chart, and 22% is 'in fact a lot more where you have more RT'... :D The bottom line here truly is that RT is still in its infancy and you're a fool paying a premium for it. You can safely leave it to Nvidia to keep your 'premium' game experience at sub-60 FPS to keep that buy urge alive! And then safely leave it to parrots like you to sell it. Jokers.
 
Worse than a 1080 Ti and DOA eh?

OK bud.

We'll see how well it sells.

(screenshot attachment)
 
The 4070 is super DOA and if you buy/bought one, you're an idiot, simple.
Indeed, I wouldn't touch it with dgistefanis' wallet/purse.

Luckily for Nvidia, some don't pay attention to reviews.
 
Outdated, irrelevant chart that comes from the... same source as I posted. Techspot is run by the HUB guys, in case you didn't know.

Never mind; the original point stands. Too many aspects where it is inferior for it to be considered an option on equal footing.
Not sure what you or others are rambling about. We're talking about price/performance, or cost per frame.

16GB 6950 XT $600/146 = $4.11
16GB 6800 $470/111 = $4.23
16GB 6800 XT $540/126 = $4.29
12GB 4070 $600/126 = $4.76
20GB 7900 XT $780/161 = $4.84
24GB 7900 XTX $950/183 = $5.19

Your 4070 looks pretty bad, doesn't it? And that's the best-case scenario, assuming you get lucky and find it at MSRP. Pick a $650 AIB card and you're looking at $5.16. Big yikes.
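For what it's worth, the numbers above are simply price divided by the relative-performance score. A minimal sketch of that arithmetic, reusing the prices and scores quoted in the post (the scores themselves are not independently verified here):

```python
# Cost per "relative frame": price divided by the relative-performance score.
# Prices (USD) and scores are the ones quoted in the post above.
cards = {
    "6950 XT":  (600, 146),
    "6800":     (470, 111),
    "6800 XT":  (540, 126),
    "4070":     (600, 126),
    "7900 XT":  (780, 161),
    "7900 XTX": (950, 183),
}

# Sort from best (cheapest per frame) to worst and print each card's ratio.
for name, (price, score) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:9s} ${price}/{score} = ${price / score:.2f} per frame")
```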
 
...Regarding RDNA 3: it is exceptionally unusual that AMD, the supposedly consumer-friendly company, hasn't serviced the highest-volume midrange markets, and I'll call it: the reason isn't that the midrange is supplied by previous-generation overstock. It's because, if the 7900 series is anything to go by... they have a stinker on their hands.
100% agree. Which is also why I am skipping this money grab gen from both camps entirely.
 
First you cherry-pick efficiency and relative RT at 1440p and then leave out the fact that in raster, it's not even faster than a two-year-old card that can be had for less.

He has a point. Especially once it's equalized for energy consumption, it's a bloodbath. Even accounting for RT, it's still a better option; you'd be turning settings down on the 6800 XT anyway because it doesn't have enough processing power to handle it. Meanwhile, if VRAM starvation were such a critical concern, the RTX 3090 should be beating the 4070 Ti into a pulp, but it's not. Their performance is roughly equal, with the 3090 Ti only a few points ahead of both, nothing worth mentioning. This is likely due to Ada's new, more efficient way of resolving BVH intersections.

8 GB is low, but 12 will do okay for some time to come. Nvidia being stingy with VRAM is nothing new, and that hasn't caused AMD's cards to magically become faster; by the time it truly mattered, both were long since obsolete. Sure, I 200% agree that RAM/VRAM requirements are rising and will continue to rise (hilarious that we're having this conversation, since I'm usually the guy who openly defends throwing RAM at problems), but I have to argue that 12 GB is still well within the comfort zone for what this thing is intended to do: 1080p to 1440p gaming. I wouldn't buy it, personally, but that's because I'm the kind of guy who likes to buy the good stuff - sadly priced out of reach now.

Still on the market for an affordable secondary GPU... even a Vega 56 or 64 would do me well; their prices have been going down and some gems have been showing up. I wonder if I'll get lucky?

100% agree. Which is also why I am skipping this money grab gen from both camps entirely.

Against my will, same here. I wanted the 4090, but not at the prices it's being sold at. If the GPU market doesn't improve, I will use my 3090 until it croaks.

Your 4070 looks pretty bad, doesn't it?

No, it doesn't look bad at all, considering that this is far from the only metric that matters.
 
Worse than a 1080 Ti and DOA eh?

Yes, worse. Reading comprehension, try it sometime. Cognitive skills required.

(RTX 4070 spec screenshot)


1080ti:

(GTX 1080 Ti spec screenshot)


'This is fine'

You just confirmed again that you can't see the right specs in the right relation. Well done. I did even spell it out for you - twice now. Your input won't be missed going forward. You're not entirely wrong; I'm sure the 4070 will sell better as a mainstream card! It's also a given that the vast majority of buyers aren't as knowledgeable, and you're clearly sharing their level of knowledge. Again, well done; I hope your proofreader tag doesn't deteriorate further in its credibility - you're sub-zero on my scale. The above numbers simply don't lie.
 
You just confirmed again that you can't see the right specs in the right relation. Well done. I did even spell it out for you - twice now. Your input won't be missed going forward.
504 is worse than 484?

Ok bud.

:laugh:

Even if your critical thinking truly is this simplistic - comparing specs without looking at actual performance (where the 4070 is almost twice as fast) - 484 is the lower number, and therefore worse, no?
 
No, it doesn't look bad at all, considering that this is far from the only metric that matters.
What metric is infinitely better??? If you care so much about getting the best, then get the 4090.

I can order a $470 6800 and I'll just laugh at the 3070/3070 Ti/3080/4070 users.
 
You just confirmed again that you can't see the right specs in the right relation. Well done. I did even spell it out for you - twice now. Your input won't be missed going forward.

Raw memory bandwidth has stopped mattering as much for some time now, with the advent of large caches and efficient memory compression/lossless data-management algorithms. Remember that the RTX 3090's memory bandwidth is in the vicinity of the terabyte-per-second mark - and with a very quick OC can exceed it - same with AMD's old Radeon VII (mine hit 1.25 TB/s easily) - but this is still slower, off-die memory. That the 6900 XT does what it does with half its competitor's memory bandwidth is entirely down to cache, and it's why high hit rates are essential for it to keep up performance.

If I had to guess, the 484 GB/s of the GTX 1080 Ti would be easily matched by around half of that on an RDNA 2 design such as the 6600 XT... and say, don't they perform about the same, too? I think the 6600 XT is actually around 10% faster, if I recall correctly.
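To make the cache argument concrete, here is a toy model: effective bandwidth is roughly a blend of on-die cache bandwidth and DRAM bandwidth, weighted by the cache hit rate, which is why a narrow-bus RDNA 2 card can hold up at 1080p and fall off as the hit rate drops at higher resolutions. Every figure below is an illustrative assumption, not a measured number:

```python
# Toy model: blend cache and DRAM bandwidth by the fraction of accesses
# that hit the large last-level cache. Purely illustrative numbers.
def effective_bandwidth(dram_gbps: float, cache_gbps: float, hit_rate: float) -> float:
    """Average bandwidth seen by the GPU for a given cache hit rate."""
    return hit_rate * cache_gbps + (1.0 - hit_rate) * dram_gbps

# Assumed 6600 XT-class card: ~256 GB/s of DRAM bandwidth, much faster on-die cache.
print(effective_bandwidth(dram_gbps=256, cache_gbps=1500, hit_rate=0.55))  # ~940 GB/s at 1080p
print(effective_bandwidth(dram_gbps=256, cache_gbps=1500, hit_rate=0.30))  # ~629 GB/s at 4K
```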
 
100% agree. Which is also why I am skipping this money grab gen from both camps entirely.
Virtually no one should be buying generation after generation; like with phones, there is little to be gained from yearly upgrades.


Except for the fact that NVIDIA owners and users are/have been trained to pay up for a new GPU every two years because the VRAM ran out.

Or you have the wrong tensor cores doing nothing, etc. etc.

Yet that's a bonus to some. The RX 580 showed how to avoid e-waste; the 3070 and its 4070 replacement are showing how to make e-waste.
 
504 is worse than 484?

Ok bud.

:laugh:

Even if your critical thinking truly is this simplistic - comparing specs without looking at actual performance (where the 4070 is almost twice as fast) - 484 is the lower number, and therefore worse, no?
It is pretty much the same, isn't it, while the core power, as you correctly point out, is virtually doubled. I've made that very comparison; it's the same ballpark - just like your comment on 'it's faster' while pointing out percentile gaps in raster. I agree, that's the same perf in raster.

See, and this kind of bullshit response from your end is why you lose all credibility every time. Everyone with non-hazy vision can see the problem in the relative specs, core to VRAM, except you.

Raw memory bandwidth has stopped mattering as much for some time now, with the advent of large caches and efficient memory compression/lossless data-management algorithms.
And yet cache also turns into an Achilles heel even for AMD at 4K, where it drops off against Nvidia's 4090. At that point they're saved (most of the time), to an extent, by hard throughput still being 800 GB/s on a 7900 XT.

Cache does NOT alleviate constraints in the very use cases where you need it most, which is when heavy swapping is required because large amounts of data are needed at will. The two are at odds with one another. At that point you are saved somewhat by royal VRAM capacity.

It's a bit like 'I have super boost clocks' under loads where you already exceed useful FPS numbers by miles. Who cares?? It's nice for bench realities; in actual gaming it doesn't amount to anything. This is where experience comes in. We've seen this all before, and crippled bandwidth - real, hard bandwidth - is and will always be a defining factor.

Put that 6600 XT on a higher res and it will die horribly, whereas in a relative sense the 1080 Ti would still be standing upright. I experience this now with a 1080 with 8GB: I can fill the framebuffer and FPS can go down, but the affair is still buttery smooth in frame times.
 
Everyone with non-hazy vision can see the problem in the relative specs, core to VRAM, except you.
Non-hazy vision, eh?

Based on assumptions that VRAM should scale with performance indefinitely and that zero advances have been made in other regards...

Surely we would see these catastrophic effects when playing at 4K, no?
 
Surely we would see these catastrophic effects when playing at 4K, no?
The scales do tip over to high-bandwidth cards excelling at 4K, yes - what's the point here? This has been true since 2015.
Where did anyone say VRAM should scale with performance indefinitely? Or that zero advances are made in other regards? We all acknowledge the use of cache. But we also need to judge it for what it really does and for where it presents a limitation.

Nuance. You're missing it, and again, your style of discussing this confirms everything once more. Your input is of little relevance; you prefer to cherry-pick the examples where it works out well, omitting the ones where the whole house of cards falls apart. Whereas it's exactly those situations where you run into limits that will bother you as an end user, right? It's the same as touting 500 FPS in CS:GO to favor an Intel CPU over an X3D (that's not you per se, but we've seen it on TPU). Completely ridiculous nonsense.

Or - just a difference of perspective, let's keep playing nice - you are content with cards that have a life expectancy of 2-3 years, and I expect at least double that from a highly priced product. That's really the only argument you could possibly have for promoting cards with lackluster specs for the money. If you do actually upgrade gen-to-gen or bi-gen, that's a reasonable approach regardless. I don't; I think it's a total waste of time to upgrade for 25-30%.
 
And yet cache also turns into an Achilles heel even for AMD at 4K, where it drops off against Nvidia's 4090.

Put that 6600 XT on a higher res and it will die horribly, whereas in a relative sense the 1080 Ti would still be standing upright.

I agree with the point that it's an Achilles heel, but unfortunately, that's a problem inherent to AMD's gamble of relying on the last-level cache for bandwidth and using slower G6 modules to save on the BOM. It's a compromise their engineers felt was fair, at least at the time.

The 6600 XT will die horribly, yes, but that's likely due to its smaller front end, lower bandwidth that its smaller Infinity Cache can't overcome, and 8 GB attached to just two channels (128-bit); given what it is, though, a low-cost GPU intended for 1080p gaming, it's a valiant effort IMO. RDNA 3's approach was smart: they decoupled the cache from the GCD and attached it to the MCDs, giving each channel a lot more cache to work with. Ampere's 6 MB of L2 plus roughly a terabyte per second of bandwidth versus Navi 21's 128 MB of L3 plus roughly 512 GB/s has mostly ended in a draw, with the drawbacks of AMD's design only showing at extreme resolutions, well beyond anything reasonable to ask of a 256-bit-bus GPU.

It leaves only one question: why is RDNA 3 underperforming so much? Either it's horribly broken, or AMD's software division has a lot, and I mean a lot, to explain.
 
But that's the very essence of what I'm getting at. If these cards are supported by a more royal memory subsystem, that core can actually carry them quite a bit longer. It's not unplayable if FPS ends up at 40 minimums; we can experience this ourselves, and I do on the daily. It works just fine - but only AS LONG AS you have stable frametimes. That's the territory we speak of when we speak of longevity. And that IS the longevity I really do expect from x70-class cards and up - it is the longevity they've historically had.
 
Let's face it, this is Jensen's Sollbruchstelle (planned breaking point): creating a card that will fail in the exact way you're describing within a handful of years, to ensure the user will buy yet another new card.

Planned obsolescence, hooray!
 
We're talking about price/performance, or cost per frame.

12GB 4070 $600/126 = $4.76
The 4070's MSRP is 669, isn't it, not 600? I do know that's the case in EUR, and in reality I'll probably see it start at 700 for the FE.

It'll easily land even above the 7900 XTX in cost per frame, as a midrange contender. It's hilariously bad.

I can buy a 7900 XT for 836 EUR today and an XTX at just over 1K in the Netherlands. That's a net perf gap of a whopping 50% at roughly the same relative cost per frame.

700 was about right, it seems, too (and that's for a bottom-end AIB contraption, which at 200 W might just be OK):
(price listing screenshot)

Instant no. To me this feels like paying for a VW Up with all the options that drives exactly the same as a stock Renault Twingo or Citroën C1. I just can't...
 
Planned obsolescence, hooray!

Lowering texture quality means the GPU is obsolete? :rolleyes:

I tested Hogwarts Legacy with Low texture quality vs Ultra texture quality and it makes very little difference, saving ~4 GB of VRAM @ 4K Ultra RT DLSS
link
 
The cheapest 6800 XT I can find in Germany is 600€ and the cheapest 6950 XT is 670€. The 6800 XT has roughly the same gaming performance as the 4070, but worse performance in everything else. The 4070 would also save me 70€ a year in energy cost. So the decision isn't really as easy as just comparing gaming performance and sale price and calling it a day.
Prices sometimes change day by day.
Since RDNA 2 is also on TSMC, those cards can be undervolted quite well. Same as the 4070.
Personally, I am against buying an old card. But in this case, price and performance matter. Currently you either go old AMD for price or Nvidia for RT.

And it is not like the 4070 has ANY RT improvements... raster AND RT performance are the same as the 3080's.
You get +2 GB of VRAM and 40-50% less power draw for a bit less money. That IS a better deal, but not a very attractive one for "next gen".

But that will definitely change when the RX 7800 XT or 7700 XT come out.
The 7700 XT will have 12 GB of VRAM and probably similar performance to the 4070, but power draw and RT will be worse. AMD HAS to sell it below $499 to make it attractive (since the 6800 XT is ~$500 with 4 GB more VRAM).
The 7800 XT is the 6900 XT competitor, also with 16 GB of VRAM. $599 might not be low enough for it. $599 is 649€, and the 6950 XT is 649€. To make it viable, it has to be sold for $579 or less.
It is a real piece of work to make those cards fit into the current market... how much distance from the other current cards is alright?
 
Popularity is NOT synonymous with superiority... if it were, we'd all agree that McDonald's has the highest-quality food because they sell the most of it.
Except we are not talking about hamburgers here, and if people keep choosing Nvidia over AMD, it is because of quality issues with AMD.
 
Planned obsolescence, hooray!

Nah, it's much easier now. Just proclaim a new DLSS 4.0 with the next generation, and you can lock the previous generation's top-of-the-line 1700 EUR card out of the new "tech" and lower its usefulness even compared to the midrange of the new line.

That's planned obsolescence, and you can fine-tune just how crappy you want the old cards to perform compared to the new ones, right in the drivers!
 
People are really divided here.

I think they are all overpriced, the new 4070 and the old 6000-series cards alike (it's a bit worse for the latter, being older-gen cards); I wouldn't buy either AMD or Nvidia at these prices.
 