Thursday, September 12th 2024

GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

NVIDIA quietly released a variant of the GeForce RTX 4070 featuring slower 20 Gbps GDDR6 memory, replacing the 21 Gbps GDDR6X that the original RTX 4070 comes with. Wccftech has access to a GALAX-branded RTX 4070 GDDR6, and put it through benchmarks focused on comparing it to a regular RTX 4070. Memory type and speed are the only changes in specs; the core-configuration isn't changed, nor is the GPU clock speed. Wccftech's testing shows that the RTX 4070 GDDR6 measures 0-1% slower than the RTX 4070 (GDDR6X) at 1080p and 1440p resolutions, while the difference between the two is about 2% at 4K Ultra HD.

Wccftech's test-bed is comprehensive, with 27 game tests, each across 3 resolutions, plus 7 synthetic tests. The synthetic tests are mainly part of the 3DMark suite, including Speed Way, Fire Strike, Time Spy, Port Royal, and their presets. Here, the RTX 4070 GDDR6 is nearly identical in performance, with a 0-0.2% delta to the RTX 4070 GDDR6X. In the game tests, performance varies by resolution. At 1080p the delta is 0-1%, with the only noteworthy outliers being "Metro Exodus" (extreme preset), where the RTX 4070 GDDR6 loses 4.2%, and "Alan Wake 2," where it loses 2.3%.
The story is similar at 1440p, with no significant delta across most tests. Outliers include "Metro Exodus" (extreme), where the GDDR6 model loses 9.1% performance, and "Alan Wake 2" (-4.8%). In our launch review of the original RTX 4070, we remarked that the card is very much capable of 4K gameplay if you know your way around game settings or use DLSS, so the 4K numbers are relevant, too. Averaged across all games at 4K, the delta is 2.3%, but most games lose only 1-2%, with the outliers extending their margins: "Metro Exodus" is now 10.4% slower, "Alan Wake 2" 2.8% slower, "Dead Space" 2.5% slower, and quite a few others post losses in excess of 2%.
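For readers who want to sanity-check figures like these, below is a minimal Python sketch of how per-game deltas and an overall average can be computed. The FPS values are hypothetical placeholders, not Wccftech's raw data, and the source doesn't state its averaging method, so a simple arithmetic mean is assumed here.

# Sketch: per-game performance deltas and an overall average.
# All FPS values below are hypothetical placeholders, not measured data.
results_4k = {
    # game: (fps_gddr6x, fps_gddr6)
    "Metro Exodus (extreme)": (62.0, 55.5),
    "Alan Wake 2": (41.0, 39.9),
    "Dead Space": (71.0, 69.2),
}

deltas = {}
for game, (fps_x, fps_6) in results_4k.items():
    # Negative delta means the GDDR6 card is slower than the GDDR6X card.
    deltas[game] = (fps_6 / fps_x - 1.0) * 100.0
    print(f"{game}: {deltas[game]:+.1f}%")

# Simple arithmetic mean of the per-game deltas (an assumption; the source's
# exact averaging method is not stated).
print(f"Average delta: {sum(deltas.values()) / len(deltas):+.1f}%")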
The 20 Gbps GDDR6 memory, in purely numerical terms, presents a 4.76% drop in memory bandwidth compared to the 21 Gbps GDDR6X, and these tests show just how much that spec change affects performance. Cards like the GALAX 1-click OC 2X that Wccftech tested are reaching retail, some with clear labelling on the box that mentions the GDDR6 memory type. Initial availability on Newegg showed that RTX 4070 GDDR6 cards, specifically an ASUS DUAL OC, weren't priced any lower than their GDDR6X counterparts. We hope this changes, and that the RTX 4070 GDDR6 ends up priced low enough to match the original in performance per dollar.
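For perspective, here is a quick back-of-the-envelope bandwidth calculation in Python, assuming the 192-bit memory bus that both RTX 4070 variants share.

# Sketch: memory bandwidth of the two RTX 4070 memory configurations.
# Bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps.
BUS_WIDTH_BITS = 192

def bandwidth_gb_s(data_rate_gbps: float) -> float:
    return BUS_WIDTH_BITS / 8 * data_rate_gbps

bw_gddr6x = bandwidth_gb_s(21.0)  # 504 GB/s
bw_gddr6 = bandwidth_gb_s(20.0)   # 480 GB/s
drop = (1.0 - bw_gddr6 / bw_gddr6x) * 100.0
print(f"GDDR6X: {bw_gddr6x:.0f} GB/s, GDDR6: {bw_gddr6:.0f} GB/s, drop: {drop:.2f}%")
# A ~4.76% bandwidth cut translating into a 0-2% average FPS loss suggests the
# GPU is only partially bandwidth-limited at these settings.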
Sources: Wccftech, VideoCardz

39 Comments on GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

#26
las
Kasy21Yeah, a 4090, but I was referring to the cooler on something like a cheap AIB model of a 3060 Ti. In the case of some cheap 4070s made by certain companies, the cooling solutions are weaker; it's not uncommon for some companies to skimp in the cooling department, like not having thermal pads on the memory modules.
Not a problem at all - www.techpowerup.com/review/msi-geforce-rtx-4070-gaming-x-trio/37.html

3060 Ti did not use GDDR6X, only 3070 Ti did in the 3060/3070 range

TjMax for GDDR6X is like 95-100C, some specs even list 110-120C, but most cards run the memory in the 60-80C range
Posted on Reply
#27
Assimilator
AusWolfThis begs the question: why did Nvidia release the 4070 with GDDR6X and not non-X in the first place? Artificial price markup? Bragging rights?
Wasn't G6X significantly faster (in terms of clock speeds) at the time, and G6 has only recently caught up?
Posted on Reply
#28
AusWolf
AssimilatorWasn't G6X significantly faster (in terms of clock speeds) at the time, and G6 has only recently caught up?
Maybe. As G6X is just a slightly faster spinoff version of G6, I never paid much attention to it.
Posted on Reply
#29
azrael
Seems like nVidia saves some money on this. Probably the manufacturers as well. The ones who won't benefit from the cost savings are the consumers. But hey, at least LJM can buy a couple more jackets...
Posted on Reply
#30
Taisho
Power efficiency should be checked at the same memory clock speed; otherwise, people might come away with the wrong assumption that GDDR6 is inherently more efficient, when it's really just that GDDR6X is clocked well past its efficiency sweet spot.
Posted on Reply
#31
Vya Domus
Kasy21Didn't GDDR6X run hot AF? I heard that on 3000-series cards this memory type ran much hotter
They also just had crap cooling for the memory.
Posted on Reply
#32
lexluthermiester
AusWolfOn the high end, probably, but this is a x70-class GPU, price matters here.
You're right, that's a fair point. Still, there would have been whiners..
Posted on Reply
#34
evernessince
Beginner Macro DeviceAs expected. The main question is how this affects power efficiency. Can you squeeze even more FPS per W with G6?
It's 4% in HWUB's bench, which turns out to be almost as much as the reduction in bandwidth:


To me this indicates that the 4070 is definitely memory bottlenecked in some titles, because if it wasn't you wouldn't have seen a near 1:1 drop-off in performance.

Mind you, the performance difference is really beside the point; the fact remains that Nvidia is using cheaper, slower memory while not passing the savings onto the customers or making it clear in the model name.
TaishoPower efficiency should be checked at the same memory clock speed; otherwise, people might come away with the wrong assumption that GDDR6 is inherently more efficient, when it's really just that GDDR6X is clocked well past its efficiency sweet spot.


The 4070 GDDR6 model consumes 1.52% less power while being 4% slower. It also consumes more power at idle. So overall a small reduction in efficiency.
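As a rough check on that conclusion, here is a minimal Python sketch using only the 4% performance and 1.52% power figures quoted above; the absolute FPS and wattage values aren't reproduced here.

# Sketch: relative FPS-per-watt change, using only the quoted percentages.
perf_ratio = 1.0 - 0.04      # GDDR6 card delivers ~96% of the GDDR6X card's FPS
power_ratio = 1.0 - 0.0152   # ...while drawing ~98.5% of its power under load
efficiency_ratio = perf_ratio / power_ratio
print(f"Relative FPS/W: {efficiency_ratio:.3f}")  # ~0.975, i.e. roughly 2.5% worse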
Posted on Reply
#35
Macro Device
evernessincethe fact remains that Nvidia is using cheaper, slower memory while not passing the savings onto the customers or making it clear in the model name.
That's because they can get away with it. Too obvious to discuss.
evernessinceoverall a small reduction in efficiency.
Not great. At least I know what GPU to avoid, unless the price is really good.
Posted on Reply
#36
Lew Zealand
evernessinceIt's 4% in HWUB's bench, which turns out to be almost as much as the reduction in bandwidth:


To me this indicates that the 4070 is definitely memory bottlenecked in some titles, because if it wasn't you wouldn't have seen a near 1:1 drop-off in performance.

Mind you, the performance difference is really beside the point; the fact remains that Nvidia is using cheaper, slower memory while not passing the savings onto the customers or making it clear in the model name.

The 4070 GDDR6 model consumes 1.52% less power while being 4% slower. It also consumes more power at idle. So overall a small reduction in efficiency.
They tested a Gigabyte AIB card vs the Founders Edition 4070 (as there's no Founders Edition of this GDDR6 version, of course), and if you look at W1zz's tests of various AIB models, the idle power usage can vary model-to-model by around the 2W listed here. The same goes for the load power usage; both are within model-to-model variation, so there's little to conclude here.

Also, the FPS reduction at 1440p, where this card is likely to be used, is only 3% in their testing. The 4% figure is at 1080p, which isn't really a primary use case for this GPU.

Still, with seemingly no price reduction from the GDDR6X model, this gives consumers a slightly lower-value product.
Posted on Reply
#37
evernessince
Lew ZealandThey tested a Gigabyte AIB card vs the Founders Edition 4070 (as there's no Founders Edition of this GDDR6 version, of course), and if you look at W1zz's tests of various AIB models, the idle power usage can vary model-to-model by around the 2W listed here. The same goes for the load power usage; both are within model-to-model variation, so there's little to conclude here.
The 4070 GDDR6 tested in the video is clocked a mere 15 MHz higher than a reference 4070: www.gigabyte.com/Graphics-Card/GV-N4070WF3OCV2-12GD/sp#sp

So for all intents and purposes it is a good comparison to the reference model.

FYI, a small difference doesn't mean there's nothing to conclude. You are throwing shade on every model-variant review with a comment like that. You could perhaps advise waiting for a larger sample size, but a small difference doesn't inherently mean the data isn't pointing to a genuine difference.
Posted on Reply
#38
Lew Zealand
evernessinceThe 4070 GDDR6 tested in the video is clocked a mere 15 MHz higher than a reference 4070: www.gigabyte.com/Graphics-Card/GV-N4070WF3OCV2-12GD/sp#sp

So for all intents and purposes it is a good comparison to the reference model.

FYI, a small difference doesn't mean there's nothing to conclude. You are throwing shade on every model-variant review with a comment like that. You could perhaps advise waiting for a larger sample size, but a small difference doesn't inherently mean the data isn't pointing to a genuine difference.
Yes, I'm very much pointing out that every model variant is subject to variations larger than what's shown in the HUB video. For example, the Asus Dual 4070 Super vs the FE 4070 Super; both use stock Nvidia clocks:

4W difference in idle. And I wouldn't be surprised if 5 different samples of both the FE and the Asus Dual overlapped in idle power. The same goes for load power. There's too little data in any of these examples to make conclusions, so IMO it's all within normal variance until proven otherwise. I'd love to see a HWiNFO chart of the GDDR6 vs GDDR6X power use in the two 4070s though, to see what the actual difference in power draw is.
Posted on Reply
#39
AleXXX666
BroudkaSo, how do you distinguish one from the other before buying a new card?
So you clearly want a HOTTER GDDR6X memory? :D :rolleyes:
Posted on Reply