
NVIDIA to Tune GTX 970 Resource Allocation with Driver Update

The problem that I think many non-owners can't seem to grasp (and as shown in the videos kindly posted above) is not that general performance has changed; you're right, it doesn't change the performance shown at launch. The problem is that the transition from the 3.5GB segment to the 0.5GB segment causes stutter. This is very real and it is extremely annoying. It was not showcased or highlighted in many (any?) reviews, perhaps because reviewers didn't think to look for it, or wrote off any hiccups as some personal anomaly. Maybe most tested at resolutions that fit within 3.5GB (again, this is a great 1080p/1440p card as it is), or in scenarios where the core became the bottleneck before VRAM did.

The fact remains that there are scenarios where the core is fast enough to deliver a fluid experience that would use that last partition (in essence, I disagree with those who say the point is moot because the core can't keep up). There are instances where the bottleneck is that 0.5GB, or rather where switching into it causes stutter (e.g. resolutions/settings in Mordor that would otherwise run solidly above 30fps; I'm sure there are others), and that is a problem, especially because we were lied to about the card's capabilities. Had we known, some people might have bought a 290X instead, since at higher resolutions the AMD cards (an otherwise similar-performing core) will not have this problem. I have said it about 37 times in this thread: no 290(X) would fit in my case, so the 970 is the best option for me regardless. That doesn't change the fact that the stutter is annoying.
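To put a number on that stutter rather than argue from averages, the simplest thing is to look at individual frame times instead of FPS. Here's a minimal, purely illustrative sketch: it assumes a hypothetical plain-text log with one frame time in milliseconds per line (most overlay/logging tools can produce something like this) and just counts frames that blow way past the median, which is how the 3.5GB-to-0.5GB transition would show up.

```python
# Minimal sketch: flag frame-time spikes in a log, instead of looking at average FPS.
# Assumes a hypothetical "frametimes.txt" with one frame time in milliseconds per line.
import statistics

SPIKE_FACTOR = 2.5  # a frame taking 2.5x the median frame time reads as a visible hitch

with open("frametimes.txt") as f:
    times_ms = [float(line) for line in f if line.strip()]

median = statistics.median(times_ms)
spikes = [(i, t) for i, t in enumerate(times_ms) if t > SPIKE_FACTOR * median]

avg_fps = 1000.0 / statistics.mean(times_ms)
print(f"Average FPS: {avg_fps:.1f} (median frame time {median:.1f} ms)")
print(f"Hitches over {SPIKE_FACTOR}x median: {len(spikes)} of {len(times_ms)} frames")
for i, t in spikes[:10]:
    print(f"  frame {i}: {t:.1f} ms")
```

A run can average well above 30fps and still be full of 100ms+ frames, which is the whole complaint here.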

I've been playing FC4 at 1440p with 4x MSAA since I got my GTX 970 (and on my 4GB 670s before that). Memory usage is often over 3.7GB, and the stuttering really isn't bad, or even noticeable. The odd thing is that those videos show GPU usage dropping to 0% when the stuttering happens, and that doesn't happen with my card; GPU usage is pegged at 100% the whole time (a quick way to log this yourself is sketched at the end of this post).

Plus there were plenty of opportunities where this should have come up in reviews. W1z did a lot of testing at 4K with the card, both single-card and SLI. You'd think he would have mentioned the stuttering instead of praising it as a great card for 4K. He even tested BF4 and Watch_Dogs at 4K, both of which I know use more than 3.5GB.
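For anyone who wants to check which behaviour their own card shows (usage dropping to 0% like in the videos, or pegged at 100% like mine), polling nvidia-smi once a second while the game runs is enough. A rough sketch, assuming the standard nvidia-smi tool that ships with the driver is on the PATH; the 3584MiB threshold just marks the end of the fast 3.5GB segment:

```python
# Minimal sketch: log GPU utilization and VRAM use once per second via nvidia-smi.
# Assumes nvidia-smi (shipped with the NVIDIA driver) is on the PATH.
import subprocess
import time

while True:
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=utilization.gpu,memory.used",
        "--format=csv,noheader,nounits",
    ]).decode()
    util, mem = [s.strip() for s in out.splitlines()[0].split(",")]
    note = "  <-- past the 3.5GB fast segment" if int(mem) > 3584 else ""
    print(f"GPU {util}% busy, {mem} MiB used{note}")
    time.sleep(1)
```

If utilization craters at the same moment memory crosses that line, that's the stall the videos are showing; if it stays pinned, you're seeing what I see.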
 

It might just be a benchmark suite he lets run to get FPS results, since I don't recall W1zzard ever commenting on the playability experience in his reviews. Maybe outside of his reviews from personal experience, but he hasn't commented, has he?
 
I've never experienced the so-called Radeon black screen hardlock *knocks wood*, but that doesn't mean every other guy that has had that problem is lying or delusional. Not everyone will experience a problem even if their setups are similar.
 
HardOCP did some pretty intensive 4K benchmarks using SLI at max playable settings, and also didn't really find that much discrepancy in playability, and they pegged the 970 setup between the 290X and 290 Crossfire. Techspot also did 4K testing with SLI. Funnily enough I mentioned the lack of texture fill vs the 980 in the comments (as dividebyzero, post #2).
It is definitely going to come down to games/image quality on a case-by-case basis.
 

Whoever says 4GB is enough after spending all that money on a 4K gaming rig is on crack and didn't test enough games. Hell no, 4GB is not enough. I don't play games with low standards; if I spend thousands of dollars I don't want just high settings with busted minimum frames.
I have two 1080p monitors, 60Hz and 144Hz, and there is no going back to a lower refresh rate for me just to get a pixel density that only matters at around 40 inches or more, which is more like a TV.
 
Bahaha, the funniest video yet about the 970 VRAM.
 
I just saw this on Twitter; not sure what to make of it.

[screenshot attachment: Capture.png]
 
I truly wish large corporations realized that a little honesty/culpability can go a long way towards customer loyalty.
This is an unfortunate ebb and flow at companies, especially those corporations that are compelled to demonstrate quarter-over-quarter gains.

AMD seems to have trodden discreetly and certainly shouldn't be seen as piling on... even that "4 GB means 4 GB" jab is too much. They should know dang well (as any smart company does) that this kind of "d'oh" moment could be just around the corner for them too, and they don't want their own past transgressions dredged up in these conversations.

Honestly, Dave Baumann's comment (I can't confirm he's still with AMD) was perhaps more that companies don't have to tell us, and that we have no particular right to know: "Fundamental interconnects within a GPU are not the parts that are ever discussed, because largely they aren't necessary to know about", along with ASIC "harvesting". In and of itself he's right, as long as the specifications presented are correct and the information provided isn't a pretense for concealing such weaknesses. It was reckless in this case, because this was something consumers might actually encounter; as he said, it was "understandable that this would be 'discovered' by end users".

Any company, especially at that level, must maintain an ethical rapport, not just with end-user customers but for its overall long-term health in other segments, since this sort of thing can have an adverse effect on OEMs' consideration of it as an engineered-solutions provider, and on the professional markets.
 
That's kind of what I was alluding to earlier. Not sure whether it's budget cuts/R&D trimming, or just the effort needed to get the console APU parts to market, but AMD are starting to fall behind in some of the very time-sensitive markets they've targeted. As an example (there are others, but I won't spoil the need to play tech detective), AMD's push into ARM servers - the reason they acquired SeaMicro - seems to be leading to a climb-down from earlier lofty claims. Remember that Seattle (Opteron A1100 series) was due in the second half of 2014, fully wired for SeaMicro's Freedom Fabric interconnect? A few months later Freedom Fabric was quietly dropped from at least the first generation, and while the development kits have been around since mid-2014, Seattle is for the most part still MIA - delayed (according to AMD) because of a lack of software support.
I think Dave was alluding to the sensitivity of the information to other vendors (AMD specifically in this case) as well as to the mainstream user base, because widely publicizing it would give AMD insight into Nvidia's binning strategy. If the dies/defects per wafer and wafer cost are known, it becomes a relatively easy task to estimate yields of any ASIC (see the sketch below). To use the previous example, AMD are similarly tight-lipped about Seattle's cache coherency network protocol, even though it is supposedly a shipping product. The problem with tech is that industrial secrecy has a tendency to spill over into the consumer arena - some cases more disastrously than others - where it invariably comes to light, because it is in the nature of tech enthusiasts to tinker and experiment (as an example, albeit a very minor one in the greater scheme of things: it wasn't AMD that alerted the community that their APUs perform worse with single-rank memory DIMMs).
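To show how little is actually needed for that estimate, here's a back-of-envelope sketch using the common Poisson yield model. Every number in it is a made-up placeholder, not a real GM204 or wafer-cost figure:

```python
# Minimal sketch: estimate good dies per wafer from die area and defect density.
# All inputs are illustrative placeholders, not real GM204 or foundry figures.
import math

die_area_mm2 = 400.0          # hypothetical die area
wafer_diameter_mm = 300.0     # standard 300 mm wafer
defect_density_per_cm2 = 0.1  # hypothetical defect density
wafer_cost_usd = 5000.0       # hypothetical wafer cost

# Gross dies per wafer (common rough approximation with an edge-loss correction)
gross_dies = int(
    math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
)

# Poisson yield model: probability a die has zero defects, Y = exp(-A * D0)
yield_fraction = math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

good_dies = gross_dies * yield_fraction
print(f"Gross dies/wafer: {gross_dies}, yield: {yield_fraction:.1%}, "
      f"good dies: {good_dies:.0f}, cost per good die: ${wafer_cost_usd / good_dies:.0f}")
```

Which is exactly why the split between fully-enabled and harvested dies is the sort of number neither vendor wants the other to be able to back out.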
Agreed, but I think the ethical relationship between vendor and OEM/ODM only extends as far as it costs either of them money. Hardware components have such a quick product cycle that individual issues - even major ones like Nvidia's eutectic underfill problem - tend to pass from the greater consumer consciousness fairly quickly. I would hazard a guess and say that 90% or more of consumer computer electronics buyers couldn't tell you anything substantive about the issue, or any of the others that have befallen vendors (FDIV, f00f, TLB, Enduro, GSoD, Cougar Point SATA, AMD southbridge I/O, and god knows how many others). What does stick in the public consciousness are patterns (repeat offending), so for Nvidia's sake (and that of any other vendor caught in the same mire) it has to become a lesson learned - and nothing makes a vendor take notice quicker than a substantial hit to the pocketbook.
 
Basically they'll make it go over 3.5GB even more rarely than it does now...
 
So basically G-Sync is like FreeSync, except that Nvidia developed a module that enables DP 1.2a features on non-DP 1.2a displays??? Judging by the article, that seems to be the case.
Seems to be, which would make sense since AMD didn't request the Adaptive Sync addition to the DisplayPort spec until after G-Sync launched.
 
We will eventually discover how Nvidia's sync method differs from DP 1.2a+ and why it disables audio.
 