
GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

btarunr

Editor & Senior Moderator
NVIDIA quietly released a variant of the GeForce RTX 4070 featuring slower 20 Gbps GDDR6 memory, replacing the 21 Gbps GDDR6X that the original RTX 4070 ships with. Wccftech has access to a GALAX-branded RTX 4070 GDDR6, and put it through benchmarks focused on comparing it to a regular RTX 4070. Memory type and speed are the only changes in specs; the core configuration is unchanged, as is the GPU clock speed. Wccftech's testing shows that the RTX 4070 GDDR6 measures 0-1% slower than the RTX 4070 (GDDR6X) at 1080p and 1440p resolutions, while the difference between the two is about 2% at 4K Ultra HD.

Wccftech's test-bed is comprehensive, with 27 game tests, each across 3 resolutions, and 7 synthetic tests. The synthetic tests are mainly part of the 3DMark test suite, including Speed Way, Fire Strike, Time Spy, Port Royal, and their presets. Here, the RTX 4070 GDDR6 is nearly identical in performance, with a 0-0.2% delta from the RTX 4070 GDDR6X. In the game tests, performance varies by resolution. At 1080p there is a 0-1% performance delta, with the only noteworthy outliers being "Metro Exodus" (extreme preset), where the RTX 4070 GDDR6 loses 4.2%, and "Alan Wake 2," where it loses 2.3%.



The story is somewhat similar at 1440p, with no significant delta across most tests. Outliers include "Metro Exodus" (extreme), where the GDDR6 model bleeds 9.1% performance, and "Alan Wake 2" (-4.8%). In our launch review of the original RTX 4070, we remarked that the card is very much capable of 4K gameplay if you know your way around game settings or use DLSS, so the 4K numbers are somewhat relevant. Averaged across all games, the delta is 2.3%, but most games seem to lose 1-2%, with the outliers extending their margins. "Metro Exodus" is now 10.4% slower, "Alan Wake 2" 2.8% slower, "Dead Space" 2.5% slower, with quite a few others posting losses in excess of 2%.



The 20 Gbps GDDR6 memory option, in purely numerical terms, presents a 4.76% drop in memory bandwidth from the 21 Gbps GDDR6X, and we get to see just how much this spec change affects performance. Cards like the GALAX 1-click OC 2X that Wccftech tested come with clear labelling on the box that mentions the GDDR6 memory type. Initial availability on Newegg showed that RTX 4070 GDDR6 cards, specifically an ASUS DUAL OC, weren't priced any lower than their GDDR6X counterparts. We hope this changes, and that the RTX 4070 GDDR6 ends up priced low enough to match the original in performance per dollar.
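That bandwidth figure is easy to verify yourself; a minimal sketch, assuming the RTX 4070's 192-bit memory bus (which both variants share):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte.
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = 192) -> float:
    """Peak memory bandwidth in GB/s for a given data rate and bus width."""
    return data_rate_gbps * bus_width_bits / 8

gddr6x = bandwidth_gbs(21)  # original RTX 4070: 504 GB/s
gddr6 = bandwidth_gbs(20)   # GDDR6 variant:     480 GB/s
drop = (gddr6x - gddr6) / gddr6x * 100
print(f"{gddr6x:.0f} GB/s -> {gddr6:.0f} GB/s ({drop:.2f}% less)")
```

Running this prints `504 GB/s -> 480 GB/s (4.76% less)`, which lines up with the sub-5% worst-case deficit seen in the benchmarks.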

View at TechPowerUp Main Site | Source
 
As expected. The main question is how this affects power efficiency. Can you squeeze even more FPS per W with G6?
 
And a difference like this is practically within the margin of error. I'm actually surprised, as I thought this would be noticeably slower than the GDDR6X variant.
 
so what's the point of using GDDR6X, then? cheaper solution or what??
 
4070 GDDR6 running same clocks? I mean, watt usage must have dropped by 25-50 watts by using slower memory, which might have made them increase GPU clocks?

so what's the point of using GDDR6X, then? cheaper solution or what??
GDDR6X is better than GDDR6 (more bandwidth and error correction), that's why the 4070 GDDR6 version loses at higher res while performance is similar at lower res (no bandwidth issues)

However, 4070 is not exactly a 4K/UHD solution so it probably won't matter for 99.9% of buyers


I still want to see the GPU clockspeeds and power usage here, tho.
 
I've had my RTX 4070 FE for almost 1 1/2 years now, so seeing this slower GDDR6 memory version perform about the same is moot to me, but it's definitely a good buy if you can grab one a little cheaper, at say $499. Since we are likely within 6 months from RTX 5000 announcement I'd recommend holding off.

Thinking back 3-4 years ago when the 3070 Ti came out with GDDR6X and what an insane amount of power it drew compared to the 3070 makes me wonder if Nvidia should have gone down to the slower GDDR6 back then too.
 
GDDR6X is better than GDDR6 (more bandwidth and error correction), that's why the 4070 GDDR6 version loses at higher res while performance is similar at lower res (no bandwidth issues)

However, 4070 is not exactly a 4K/UHD solution so it probably won't matter for 99.9% of buyers
The results were within margin of error though, so there was no difference.
 
The results were within margin of error though, so there was no difference.
That's why I'm interested in seeing clock speeds / power usage
 
I thought that this would be noticeable slower than a GDDR6X variant.
Why would you think that? It's literally just 5% slower; that would have been the largest possible performance difference.
 
Why would you think that? It's literally just 5% slower; that would have been the largest possible performance difference.
Dunno. Slower is always slower, but I thought that the gap would be wider.
 
Seems like just a means to soften the impact on that product tier when new hardware finally launches. They'll probably use it to cut and slash prices a bit when the new generation arrives, slightly before or after, to dampen the impact in that performance tier. Since it doesn't impact performance much and reduces costs, it gives them a little more flexibility. It'll probably slot in reasonably below whatever is coming at some SKU tier, based on what they're predicting and expecting, so they cut the manufacturing cost a bit with slower memory that barely impacts performance and make it a bit more affordable, the way it probably should've and would've been launched originally had they had stronger competition. That's just my speculation though. They have a history of this kind of thing.

Dunno. Slower is always slower, but I thought that the gap would be wider.

It's not exactly bandwidth starved. Ironically, the 3070 Ti was much better off on bandwidth with the same memory because its bus wasn't so narrow; however, Nvidia paired that with less memory, because they've got the GPU market basically cornered and can get away with that kind of backhanded thing. They're kind of stuck in a position of not bullying AMD and/or Intel too much from where they sit at this point. That's probably part of why they decided to **** over consumers with the 3070 Ti's paltry VRAM capacity: a nice memory bus paired with GDDR6X, but 8GB, huh, much wow.
 
Last edited:
While this may seem like a nothing burger and definitely is no big deal, losing up to 10% performance for what is supposed to be the same card is not insignificant. 1% average sure, but it depends on what you play, just because there is only one instance here does not mean there are no other games.

Could have used a clearer designation imo
 
Still a bait and switch in my eyes.
 
Is it that hard to put v2 or SE moniker at the end?

You have to screw up the customer, take your chances every time... ? Corporate pickles...
 
So, how do you distinguish one from the other before buying a new card?

[Image: package2.jpg]


At least for my standard MSi RTX 4070 it was clearly labeled/listed on the box.
 
looks like 99.8% of expected performance, I'm not sure how many pitchforks to grab...
 
This begs the question: why did Nvidia release the 4070 with GDDR6X and not non-X in the first place? Artificial price markup? Bragging rights?
 
On the high end, probably, but this is a x70-class GPU, price matters here.
Maybe product differentiation i.e. bragging rights at time-of-release? Was GDDR6X a big thing? Arguably un-suffixed x70 is still high-end (or at least "enthusiast"-grade), even though the actual low end has been filled by IGPs.
 
Didn't GDDR6X run hot AF? I heard that in the RTX 3000-series cards this memory type ran much hotter. If they realised that for some cards it's not worth the performance gain, I think they made the right decision
 
G6 does not mean more efficiency than G6X but lower.
G6 uses less power than GDDR6X because it's slower

Didn't GDDR6X run hot AF? I heard that in the RTX 3000-series cards this memory type ran much hotter. If they realised that for some cards it's not worth the performance gain, I think they made the right decision
Not really, the GDDR6X on my 4090 sits at like 60-65°C in demanding games at 100% GPU usage
 