Thursday, September 12th 2024

GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

NVIDIA quietly released a variant of the GeForce RTX 4070 featuring slower 20 Gbps GDDR6 memory, replacing the 21 Gbps GDDR6X that the original RTX 4070 ships with. Wccftech got hold of a GALAX-branded RTX 4070 GDDR6 and put it through benchmarks focused on comparing it to a regular RTX 4070. Memory type and speed are the only changes in specs; the core configuration and GPU clock speeds are unchanged. Wccftech's testing shows that the RTX 4070 GDDR6 measures 0-1% slower than the RTX 4070 (GDDR6X) at 1080p and 1440p, while the difference between the two grows to about 2% at 4K Ultra HD.

Wccftech's test-bed is comprehensive, with 27 game tests, each run at 3 resolutions, plus 7 synthetic tests. The synthetic tests are mainly drawn from the 3DMark suite, including Speed Way, Fire Strike, Time Spy, Port Royal, and their presets. Here, the RTX 4070 GDDR6 is nearly identical in performance, with a 0-0.2% delta against the RTX 4070 GDDR6X. In the game tests, performance varies by resolution. At 1080p the delta is 0-1%, with the only noteworthy outliers being "Metro Exodus" (extreme preset), where the RTX 4070 GDDR6 loses 4.2%, and "Alan Wake 2," where it loses 2.3%.
The story is similar at 1440p, with no significant delta across most tests. Outliers include "Metro Exodus" (extreme), where the GDDR6 model bleeds 9.1% of its performance, and "Alan Wake 2" (-4.8%). In our launch review of the original RTX 4070, we remarked that the card is very much capable of 4K gameplay if you know your way around game settings or use DLSS, so the 4K numbers are relevant too. Averaged across all games at 4K, the delta is 2.3%, but most titles lose only 1-2%, with the outliers extending their margins: "Metro Exodus" is now 10.4% slower, "Alan Wake 2" 2.8% slower, "Dead Space" 2.5% slower, and quite a few others post losses in excess of 2%.
In purely numerical terms, the 20 Gbps GDDR6 memory option represents a 4.76% drop in memory bandwidth compared to the 21 Gbps GDDR6X, and these tests show just how much that spec change affects performance. Cards like the GALAX 1-click OC 2X that Wccftech tested carry clear labelling on the box that mentions the GDDR6 memory type. Initial availability on Newegg showed that an RTX 4070 GDDR6 card, specifically an ASUS DUAL OC, wasn't priced any lower than its GDDR6X counterpart. We hope this changes, and that the RTX 4070 GDDR6 ends up priced low enough to match the original in performance per dollar.
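For reference, the bandwidth math works out as follows. This is a minimal sketch assuming both variants keep the 192-bit memory bus of the original RTX 4070; the figures are back-of-the-envelope peak numbers, not measurements taken from the cards.

# Back-of-the-envelope peak bandwidth comparison for the two RTX 4070 variants.
# Assumes a 192-bit memory bus on both cards (stated assumption, not verified here).
BUS_WIDTH_BITS = 192

def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    # per-pin data rate (Gbps) x bus width (bits) / 8 = peak bandwidth in GB/s
    return data_rate_gbps * bus_width_bits / 8

gddr6x = peak_bandwidth_gbs(21.0)  # original RTX 4070: 21 Gbps GDDR6X -> 504 GB/s
gddr6 = peak_bandwidth_gbs(20.0)   # new variant: 20 Gbps GDDR6 -> 480 GB/s
drop_pct = (gddr6x - gddr6) / gddr6x * 100
print(f"{gddr6x:.0f} GB/s vs {gddr6:.0f} GB/s, a {drop_pct:.2f}% cut")  # 504 vs 480 GB/s, 4.76%

That 24 GB/s cut only costs frames when a game actually leans on memory bandwidth, which is consistent with the deltas growing as the resolution rises in Wccftech's numbers.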
Sources: Wccftech, VideoCardz

39 Comments on GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

#1
Beginner Macro Device
As expected. The main question is how this affects power efficiency. Can you squeeze even more FPS per W with G6?
#2
Ruru
S.T.A.R.S.
And a difference like this is practically within the margin of error. I'm actually surprised, as I thought that this would be noticeably slower than the GDDR6X variant.
#3
SIGSEGV
so what's the point of using GDDR6X, then? cheaper solution or what??
#4
las
4070 GDDR6 running the same clocks? I mean, power draw must have dropped by 25-50 watts from using slower memory, which might have let them increase GPU clocks?
SIGSEGV: so what's the point of using GDDR6X, then? cheaper solution or what??
GDDR6X is better than GDDR6 (more bandwidth and error correction), that's why the 4070 GDDR6 version loses at higher resolutions while performance is similar at lower res (no bandwidth issues)

However, the 4070 is not exactly a 4K/UHD solution, so it probably won't matter for 99.9% of buyers


I still want to see the GPU clockspeeds and power usage here, tho.
#5
phints
I've had my RTX 4070 FE for almost 1 1/2 years now, so seeing this slower GDDR6 memory version perform about the same is moot to me, but it's definitely a good buy if you can grab one a little cheaper, at say $499. Since we are likely within 6 months of the RTX 5000 announcement, I'd recommend holding off.

Thinking back 3-4 years to when the 3070 Ti came out with GDDR6X, and what an insane amount of power it drew compared to the 3070, makes me wonder if Nvidia should have gone with the slower GDDR6 back then too.
#6
persondb
las: GDDR6X is better than GDDR6 (more bandwidth and error correction), that's why the 4070 GDDR6 version loses at higher resolutions while performance is similar at lower res (no bandwidth issues)

However, the 4070 is not exactly a 4K/UHD solution, so it probably won't matter for 99.9% of buyers
The results were within the margin of error though, so there was no difference.
#7
las
persondb: The results were within the margin of error though, so there was no difference.
That's why I am interested in seeing clock speeds / power usage
#8
Vya Domus
Ruru: I thought that this would be noticeably slower than the GDDR6X variant.
Why would you think that? The memory is literally just 5% slower, so that would have been the largest possible performance difference.
#9
Ruru
S.T.A.R.S.
Vya Domus: Why would you think that? The memory is literally just 5% slower, so that would have been the largest possible performance difference.
Dunno. Slower is always slower, but I thought that the gap would be wider.
#10
InVasMani
Seems like just a means to soften the blow this product tier will take when new hardware finally launches. They'll probably use it to cut prices a bit shortly before or after the new cards arrive, to dampen the impact in that performance tier. Since it barely affects performance and reduces costs, it gives them a little more flexibility. The card will presumably end up a bit slower than whatever SKU is coming in this tier, based on what they're predicting and expecting, so cutting the manufacturing cost with slower memory that hardly impacts performance lets them make it a bit more affordable, the way it probably should've and would've launched originally had they had stronger competition. That's just my speculation though. They have a history of this kind of thing.
Ruru: Dunno. Slower is always slower, but I thought that the gap would be wider.
It's not exactly bandwidth starved. Though ironically the 3070 Ti was much better on bandwidth with the same memory, because its memory bus wasn't so narrow; however, Nvidia paired that with less memory, because they've basically got the GPU market cornered and can get away with that kind of backhanded thing. They're kind of stuck in a position of not bullying AMD and/or Intel too much at this point from where they sit. That's probably part of why they decided to **** over consumers with the 3070 Ti's paltry VRAM capacity: a nice memory bus paired with GDDR6X, but 8 GB, huh, much wow.
#11
qlum
While this may seem like a nothing burger, and it definitely is no big deal, losing up to 10% performance on what is supposed to be the same card is not insignificant. 1% on average, sure, but it depends on what you play; just because there is only one such outlier here does not mean there are no other affected games.

Could have used a clearer designation imo
#12
chrcoluk
Still a bait and switch in my eyes.
#13
Ferrum Master
Is it that hard to put a v2 or SE moniker at the end?

You have to screw over the customer and take your chances every time...? Corporate pickles...
#14
Broudka
So, how do you distinguish one from the other before buying a new card?
#15
rethcirE
Broudka: So, how do you distinguish one from the other before buying a new card?

At least for my standard MSI RTX 4070 it was clearly labeled/listed on the box.
#16
wolf
Better Than Native
looks like 99.8% of expected performance, I'm not sure how many pitchforks to grab...
#17
yiyide266
Beginner Macro Device: As expected. The main question is how this affects power efficiency. Can you squeeze even more FPS per W with G6?
G6 does not mean more efficiency than G6X, but rather lower.
#18
lexluthermiester
Beginner Macro Device: As expected. The main question is how this affects power efficiency. Can you squeeze even more FPS per W with G6?
My guess is it uses less power. How much less requires testing, but certainly at least a little bit less.
#19
AusWolf
This begs the question: why did Nvidia release the 4070 with GDDR6X and not non-X in the first place? Artificial price markup? Bragging rights?
#20
lexluthermiester
AusWolf: This begs the question: why did Nvidia release the 4070 with GDDR6X and not non-X in the first place? Artificial price markup? Bragging rights?
Because people would have complained and whined.
#21
AusWolf
lexluthermiester: Because people would have complained and whined.
On the high end, probably, but this is an x70-class GPU; price matters here.
#22
JWNoctis
AusWolf: On the high end, probably, but this is an x70-class GPU; price matters here.
Maybe product differentiation, i.e. bragging rights at time of release? Was GDDR6X a big thing? Arguably the un-suffixed x70 is still high-end (or at least "enthusiast"-grade), even though the actual low end has been filled by IGPs.
#23
Kasy21
Didn't GDDR6X run hot AF? I heard that in the 3000 series cards this memory type ran much hotter, and if they realised that for some cards it's not worth the performance gain, I think they made the right decision
#24
las
yiyide266: G6 does not mean more efficiency than G6X, but rather lower.
G6 uses less power than GDDR6X because it's slower
Kasy21: Didn't GDDR6X run hot AF? I heard that in the 3000 series cards this memory type ran much hotter, and if they realised that for some cards it's not worth the performance gain, I think they made the right decision
Not really, the GDDR6X on my 4090 sits at like 60-65°C in demanding games, meaning 100% GPU usage
#25
Kasy21
las: G6 uses less power than GDDR6X because it's slower

las: Not really, the GDDR6X on my 4090 sits at like 60-65°C in demanding games, meaning 100% GPU usage
Yeah, a 4090, but I was referring to the cooler on something like a cheap AIB model of a 3060 Ti. In the case of some cheap 4070s made by some companies, they have weaker cooling solutions; it's not uncommon for companies to skimp in the cooling department, like not having thermal pads on the memory modules.