
ASUS GTX 980 Matrix 4 GB

W1zzard

The ASUS GTX 980 Matrix is the company's flagship GTX 980 designed to break world records with liquid nitrogen. It comes with an innovative memory-heating circuit that protects against dreaded cold bugs which cause memory instability. The card also features a dual-BIOS and voltage measurement and control points.

 
Cool review and card, though honestly I'm a bit disappointed in the stock settings, mostly because with such a high asking price I would think they could start a little higher on the clocks.

The overclocks are nice on the card, though I wish they went a bit further; the silicon lottery is the reason for that sadly, so it's luck of the draw. Still, the Matrix cards are very nicely designed with a lot of great features, including the dual BIOS that can be reverted even in the event of a failure, which is something I really like. A very nice card to say the least!
 
Specs are wrong for the GTX 970: 56 ROPs.
 
Man... the power consumption on these cards always makes me smile.

Specs are wrong for the GTX 970: 56 ROPs.

Makes its performance even more impressive, doesn't it?
 
Great review, W1zz!! I hope you can get pCars once it comes out in April! I think a lot of people want to know what can play it! My rig can't get more than 30 fps on ultra-low settings at 720p, usually around 15 fps... I think it's the new Crysis!

So it can OC to the same level as a normal 980.
It runs 6 °C cooler than a normal 980 (not a big deal, 66 °C -> 60 °C).
And it costs $110 more...
I guess if you have LN2 then you're set.
 
Man... the power consumption on these cards always makes me smile.


What it can do within 200 W certainly isn't shabby on its face, and is probably somewhat indicative of what we'll see from 6 GB GM200 parts (and 300 W). Shame they'll probably split what we really want (6 GB full-fat) into a 12 GB lower-clocking Titan and a high-clocking 5.5 GB 21-SMM numbered GeForce, at least initially. Gotta milk that cow.

The thing I find interesting is that, in essence, trading shader-unit ALUs in the pipeline (compared to Kepler) for a more (mobile) CPU-like clockspeed target seems to have worked to an extent... but is it truly the most efficient approach beyond the current circumstances? Already we see the architecture requiring more-than-optimal volts, and topping out with minimal voltage increase but substantially higher power consumption (a tight tolerance, but not perfectly optimized for 28nm's properties). Obviously other factors besides unit setup/clockspeed play a role as well (like the use of cache and a smaller bus) when factoring in total power consumption.
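(Rough rule of thumb, not anything measured in the review: dynamic switching power in CMOS scales roughly with voltage squared times frequency, which is why a small voltage bump past the sweet spot costs disproportionate power. With made-up but plausible numbers:)

```latex
% Standard CMOS dynamic-power approximation; the 6%/8% figures are illustrative only.
P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} f
\qquad\Rightarrow\qquad
\frac{P_{\mathrm{new}}}{P_{\mathrm{old}}} \approx (1.06)^{2} \times 1.08 \approx 1.21
```

So roughly +6% voltage for +8% clock already lands near +21% power, which is exactly the "minimal voltage increase but substantially higher power consumption" pattern.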

What I'm getting at is, all things being equal (both companies using HBM for the larger buffer that will be required given their core capabilities, and only just-adequate cache/total bandwidth to meet the chips' needs), and with AMD having a tightly optimized architecture (say 96 CU/96 ROPs vs 32 SMM/96 ROPs comparably within a given TDP), will a unit structure/clock like this hold up, especially on Samsung's 14nm process?

One can argue Maxwell was tailored toward the approximate die size and voltage/clock disparity of TSMC 20nm (higher performance per volt, maybe 20-25% [the difference between Maxwell and older 28nm chips], but a lower voltage ceiling), and was then more-or-less successfully back-ported. If NVIDIA planned a similar route on TSMC 16nm for either a shrink or Pascal, the trick will be meaningfully less impressive on Samsung/GF 14nm, which is tailored toward density over clockspeed (TSMC is about 10% faster, GF/Samsung about 10% smaller when comparing early 16nm/14nm; mature 16nm+/14nm may be more similar [while closer to the Samsung 14nm methodology] as both companies search for parity to satisfy customers double-sourcing).

IOW, more units and lower clockspeed is the smarter play going forward, at least initially. AMD has been geared in this direction for some time (post 7000-series), while NVIDIA's methodology is different (in essence clockspeed vs IPC). Couple that with the greater dependence on shader resources in the PS4 (which could be seen as a 939sp core arch + extra compute in some circumstances when scaling from the Xbox One, which is either shader-limited for 16 ROPs or could be seen as 470sp + extra compute for 8 practical ROPs in some cases), and this setup, especially at the high end (where it is essentially a good match for 64 ROPs, even if matched to 96 because of the larger bus size required for GDDR5 and the chip size at 28nm), may not be as impressive as it currently appears.
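To make the wide-and-slow argument concrete, here's a back-of-the-envelope sketch. The unit counts, voltages, and clocks are my own illustrative numbers (not anything from the review), and it assumes the same simplistic power ∝ units·V²·f and throughput ∝ units·f scaling as above, which real chips never quite reach:

```python
# Back-of-the-envelope comparison: "narrow and fast" vs "wide and slow" GPU configs.
# Illustrative numbers only; assumes dynamic power ~ units * V^2 * f and
# throughput ~ units * f (perfect scaling, which real silicon never achieves).

def power(units, voltage, clock_ghz):
    return units * voltage**2 * clock_ghz

def throughput(units, clock_ghz):
    return units * clock_ghz

# Config A: fewer units, high clock, needs more voltage to sustain that clock.
a_power = power(units=2048, voltage=1.21, clock_ghz=1.20)
# Config B: more units, lower clock, can run at a lower voltage.
b_power = power(units=3072, voltage=1.00, clock_ghz=0.80)

print(throughput(2048, 1.20), throughput(3072, 0.80))  # same ~2458 "work units" each
print(round(a_power), round(b_power))                   # ~3598 vs ~2458 -> wide/slow wins
```

Same nominal throughput, but the wide/slow config lands around a third lower on the power figure purely because of the V² term, which is the gist of the argument.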

Not saying the architecture isn't sound, or that overall it isn't a better-optimized solution for 28nm than even Fiji... just stating it's far from perfect, and the current methodology will probably not be as fruitful going forward.
 
"Tall card (15.5 cm)"

You specified the wrong dimension. In reality it's 1.6"/4.06 centimeters tall.
 
"Tall card (15.5 cm)"

You specified the wrong dimension. In reality it's 1.6"/4.06 centimeters tall.
Only in a tower case, where the motherboard is vertical. ;)
Semantics, really; generally the dimension you gave is regarded as the width (or thickness).
 
Yeah... you may want to update that table that lists the ROPs, shader units, etc. The 970 certainly isn't 64. ;)
 
Thanks for the review!

I would just like to know if the charts for the peak and maximum power consumption are correct... if so, WOW!!!
 
Speaking of WoW, I was surprised to see the R9 295X2 do so badly in that game; it seems like an anomaly.
I would have expected it to do at least as well as the R9 270X or better.
 
Speaking of WoW, I was surprised to see the R9 295X2 do so badly in that game; it seems like an anomaly.
I would have expected it to do at least as well as the R9 270X or better.

If something isn't optimised for SLI or CrossFire, it can punish a dual GPU, making it worse than a single card. It's the terrible trade-off of a dual GPU: when they work, they fly, but when they don't, they're awful.
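Putting rough numbers on that (purely illustrative, not benchmark data): multi-GPU only helps when the per-game scaling factor stays above 1.0x; with a missing or broken profile the second GPU adds overhead instead.

```python
# Illustrative only: effective FPS of a dual-GPU setup vs a single card,
# given a per-game scaling factor (how well the SLI/CrossFire profile works).

def dual_gpu_fps(single_gpu_fps, scaling):
    """scaling ~1.8-1.9 when a profile works well, <1.0 when it actively hurts."""
    return single_gpu_fps * scaling

single = 60
print(dual_gpu_fps(single, 1.85))  # 111 FPS: profile works, dual GPU flies
print(dual_gpu_fps(single, 0.7))   # 42 FPS: no/bad profile, worse than one card
```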
 
Most GTX 980s hover around the 200-watt mark in terms of maximum power consumption, so I'm guessing the Gigabyte 980 G1 is letting the team down.

http://www.techpowerup.com/reviews/Gigabyte/GeForce_GTX_980_G1_Gaming/25.html

Well, that's quite easy to explain: the vanilla GTX 980 has its TDP set to 185 W in the BIOS, while the Gigabyte Gaming G1 cards have the TDP set to 300 W for the GTX 980 and 250 W for the GTX 970. So the Gigabyte cards can keep their clocks up in FurMark while the others can't; they throttle trying to stay within the BIOS TDP setting.

So for overclockers the Gigabyte cards are good choices; you won't need to fiddle with power settings that much.
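If anyone wants to check what the BIOS power limit on their own card is doing, the NVML bindings expose it. A minimal sketch, assuming the pynvml package (nvidia-ml-py), a recent NVIDIA driver, and device index 0:

```python
# Minimal sketch: query the board power limit and current draw via NVML.
# Assumes the pynvml package (nvidia-ml-py) is installed and an NVIDIA GPU is present.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

limit = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)                # milliwatts
lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)  # milliwatts
draw = pynvml.nvmlDeviceGetPowerUsage(handle)                         # milliwatts

print(f"Enforced limit: {limit/1000:.0f} W (allowed {lo/1000:.0f}-{hi/1000:.0f} W)")
print(f"Current draw:   {draw/1000:.0f} W")

pynvml.nvmlShutdown()
```

When a card is throttling under FurMark, the reported draw sits pinned at the enforced limit, which is exactly the behaviour described above.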
 
I chuckled a bit when I saw that VRM...
 
"Tall card (15.5 cm)"

You specified the wrong dimension. In reality it's 1.6"/4.06 centimeters tall.

(attached image: dimensions.png, showing the standard add-in card dimension conventions)


That's been the way add-on cards have been measured since the big bang.
 