
Nvidia GTX 970 problems: This isn't acceptable.

I still stand by this: it would not have been an issue if it were just a mistake in the ROP and L2 cache figures, since we have seen the performance numbers and those do not lie. The 0.5GB segment running extremely slowly compared to the rest does pose an issue, however, and is the primary reason this is a problem. Not every game or benchmark will easily push past 3.5GB of VRAM usage, which is where this problem holds its ground: you expect 4GB to handle that load because you bought the card with 4GB. If the problem came from exceeding 4GB, that would not be on the company, since the card came with 4GB and you knew what you were getting. But if using that much VRAM on the card causes issues, then you were misled, which is why people have a right to be mad about it.

It should not be labeled the end of the world, but I believe people have the right to be mad; I would be, in the situation many are in. Does it make the card a bad card, though? No, I do not believe so. I still think it's a good card even if the specs were revised and they dropped the 0.5GB segment entirely and made it a 3.5GB card. It would still be an awesome card: it uses a decently low amount of power and gives people good value, and it still will for those who are keeping theirs, buying one, or buying more.

The thing is, this was seen by many, including reviewers, as a great alternative to the GTX 980, 780/780 Ti, R9 290, and R9 290X. Many people (including myself) have recommended it as a great value for higher resolutions (1440p, 2160p, etc.) and in some cases a better alternative to other cards on the market around the same price; heck, I have seen many people swap out cards for these for the lower power draw and the RAM upgrade. The reality is this upgrade was not as big as many were expecting, and those people are going to feel cheated and have the right to be mad or get their money back. Not everyone is forced to feel that way, and in no way should ANYONE tell someone else they're an idiot for returning their card; but the reverse is also true, and you should not call someone an idiot for returning their card because they feel cheated or are not happy.
 
Yup, ever since Fermi.

But there are some key differences in the way Fermi handled it, the way Kepler handled it, and the way Maxwell does today.

Back in the day, the Fermi card (550 Ti) launched with more VRAM than was generally the norm (1.5GB where 1GB was the norm for that price bracket). This also meant that games would hardly, if ever, utilize the additional VRAM on that card. Basically, the problem was there, but it was hidden by 'more hardware'.

Then Kepler's 192-bit bus came around. Kepler was a different beast, but STILL handled VRAM better than Maxwell's 970 does today. On Kepler's 660 and 660 Ti, the 192-bit bus was divided asymmetrically, which meant the last 0.5GB dropped to 44 GB/s of bandwidth. This also explains why the 660 Ti was a crippled card at 1080p when you used high levels of AA - similarly priced Radeons would beat the 660 Ti any day of the week at higher AA settings. The problem was again present, but hidden, because the same happened with the 256-bit 680 versus the 384-bit 7970 - yet the origin of the problem was different. Where the 680 was choking on overall bandwidth because it had the stronger, full GK104 behind it, the 660 Ti was choking on that lacking bandwidth on the last 0.5GB.

Now, compare Kepler and Maxwell and their bandwidth drop on the last 0.5GB. Very much apples to apples. The (much weaker) Kepler card has 44 GB/s of bandwidth on the last 0.5GB. The (over 40% faster) GPU in GM204 has that last 0.5GB on a measly 28 GB/s. Compare this to your system's DDR3 RAM, which generally runs around 16 GB/s - that is awfully close. This explains why people run into issues today, while they did not run into issues as much with Kepler or Fermi. The gap between the fast and slow partitions is larger than ever before, while at the same time memory requirements have vastly increased over time *and* the GPU itself is much more powerful.
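To put numbers on that, here is a quick back-of-the-envelope sketch using the GTX 970 figures Nvidia has since disclosed (7 Gbps GDDR5 pins, a 256-bit bus split into 224-bit and 32-bit partitions); it reproduces both the advertised and the partitioned bandwidths:

```python
# Back-of-the-envelope bandwidth math for the GTX 970's partitioned VRAM.
# Figures are the publicly reported ones: 7.0 Gbps GDDR5 on a 256-bit bus,
# split 224-bit (3.5GB) / 32-bit (0.5GB) after NVIDIA's disclosure.

GDDR5_GBPS_PER_PIN = 7.0  # effective data rate per pin (Gbit/s)

def bandwidth_gbs(bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given bus width at 7 Gbps per pin."""
    return GDDR5_GBPS_PER_PIN * bus_width_bits / 8  # bits -> bytes

print(f"Full 256-bit bus : {bandwidth_gbs(256):6.1f} GB/s")  # 224.0 (advertised)
print(f"3.5GB partition  : {bandwidth_gbs(224):6.1f} GB/s")  # 196.0
print(f"0.5GB partition  : {bandwidth_gbs(32):6.1f} GB/s")   #  28.0
```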

ENBSeries' comparison to effectively having only 3.5GB of VRAM is bang on the money. Whatever part of the memory Windows resides in does not even come into play: the last 0.5GB effectively behaves like system RAM.

Now tell us again this is business as usual... It most certainly is not. To everyone marginalizing this issue: please go home, you haven't been paying attention. Maxwell is the iteration of Nvidia GPU where cost cutting and memory subsystem tricks have gone too far and result in a bad gaming experience - which just so happens to be Nvidia's core business for this market. It is absolutely vital that we as (potential) customers draw the line here. If not for your currently owned 970, then at least do it for future generations of GPU.

Well said, especially that last line of your post. The line wasn't drawn with PC games, and now look at some of the buggy, unfinished, chopped-down games and DLC being sold, and the messes that some games are at launch. People complained about those games for a little while and then turned around and pre-ordered the next game from the same publisher.

I don't intend to return my 970 even if given the option. I'm pleased with its performance, but I certainly support anyone who does want a refund. For those of you who do want a refund, take action. Fill out the petition. Send Nvidia an email. Contact the Better Business Bureau (if in the USA).
 
Yeah, the painful difference between games and GPUs is that in gaming there is a great indie scene steadily working to break that overpriced triple-A big-publisher dominance, and succeeding at it (look at the sales figures for games like Divinity:OS and other good indies).

But do we see ourselves crowdfunding a new GPU company? Hmm.... Maybe we will get to crowdfund AMD someday, who knows :)
 
I can probably fudge that together somehow this weekend. I've not used DSR before, so I'm unsure how accurately I can force 3.4GB usage and 3.8GB usage. Starpoint Gemini II uses 3.4GB at maximum settings at 1440p, so that will be a useful example.

DSR wouldn't be a good way to test, since it would put the bottleneck on the pixel fillrate. Many users who were "testing" just ran DSR at insane resolutions and blamed the (expected) fps drop on the 0.5GB partition. Also, as mentioned by actual owners who experienced this, the average frame rates don't suffer, so automated suites won't show anything; reviewers would have had to stand in front of the monitor for the duration of the benchmark to notice the stuttering.
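For anyone wanting to check this themselves: average fps hides the problem, but counting frame-time spikes exposes it. A minimal sketch, assuming a FRAPS-style frametimes CSV with cumulative timestamps in milliseconds (the file name and column layout are placeholders; adjust the parsing to your capture tool):

```python
# Minimal stutter check over a frame-time log. Assumes a FRAPS-style CSV
# where each row holds a frame number and a cumulative timestamp in ms;
# "frametimes.csv" is a hypothetical file name.
import csv

SPIKE_MS = 50.0  # a frame slower than this reads as a visible hitch

def load_frametimes(path: str) -> list[float]:
    """Return per-frame durations (ms) from cumulative timestamps."""
    with open(path, newline="") as f:
        stamps = [float(row[-1]) for row in csv.reader(f)
                  if row and row[0].strip().isdigit()]  # skip header rows
    return [b - a for a, b in zip(stamps, stamps[1:])]

frames = load_frametimes("frametimes.csv")
avg = sum(frames) / len(frames)
spikes = [t for t in frames if t > SPIKE_MS]
print(f"avg frame time : {avg:.1f} ms ({1000/avg:.0f} fps)")
print(f"spikes >{SPIKE_MS:.0f} ms : {len(spikes)}  (worst {max(frames):.0f} ms)")
```

The point of the sketch: a run can report a healthy average fps while still counting dozens of >50ms spikes, which is exactly what an unattended benchmark suite will miss.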
 
Frankly the whole 9 series has been nothing but trouble for me so far.
I have had problems since day 1 with them.
Black screens, freezes, display detection timeouts, device driver hangs.

Bad card. This is what happens when there are no reference cards and all are OC'd by the manufacturer by default. Lowering clocks / upping voltage solves the crashing, TDRs, etc. This is well known for the 970 already (shades of the 560 fiasco). The 3.5GB / 56 ROP / 224-bit thing is just salt in the wound.
 
DSR wouldn't be a good way to test, since it would put the bottleneck on the pixel fillrate. Many users who were "testing" just ran DSR at insane resolutions and blamed the (expected) fps drop on the 0.5GB partition. Also, as mentioned by actual owners who experienced this, the average frame rates don't suffer, so automated suites won't show anything; reviewers would have had to stand in front of the monitor for the duration of the benchmark to notice the stuttering.

The problem isn't just that the framerates go down more than they should; it's primarily the huge stuttering and frame-time inconsistencies on top of that. DSR is both a standard feature on these cards and a 100% accurate way to test this issue. As you said, reviewers usually don't actually play or sit in front of the computer while their testing is done; they let it run unmonitored and just take the data afterwards. Add in that the vast majority of reviews didn't push high resolutions and settings, and had no reason to expect an issue doing so because of NVidia's false advertising, and yes, a lot of reviews were positive, since at first glance and based on NVidia's marketing it appeared to be a great product.

It turns out, once we owners got the cards into our systems, that simply wasn't accurate in actual usage. As my testing in the original post (the first post of this thread) shows, there are immense inconsistencies in the frame timing, and this creates an unplayable, juddery, hitching gameplay experience with the GTX 970 cards in these situations.

Regarding false advertising and Nvidia's wrongly published specs:
GoldenTiger said:
Nvidia provides review samples to these sites (linked here as approved reviews: http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-970/reviews ) and asks them to pass spec and performance info on to customers on its behalf. Additionally, they falsely advertised a full 4GB of high-speed VRAM, which, as they have now disclosed after being called on it, was simply not true. Add in the impact it has on gaming for many of us, such as myself as a 2x GTX 970 SLI user, on top of the false advertising (which, while I'm not a lawyer, I'm pretty sure is illegal, no?), and they need to take responsibility now, then provide an acceptable and equitable remedy so GTX 970 owners end up in the same or a better position than they were in before buying based on false product information.

Whether this means a full refund including pre-paid shipping costs back to the vendors, an upgrade to the GTX 980 (which would make sense, considering there's no other product they manufacture that is similar; the GTX 980 is almost the same card bar the disabled areas that caused this whole issue, and a plain refund would leave us without cards entirely), etc. needs to be determined and acted upon promptly.

Now that we have the false advertising part cleared up, let's move on once more to the performance and the practical impact this has (copying the testing from my original post here):


Full-size: http://i.imgur.com/cLUvSNO.jpg


So please, stop doing what this guy is doing below:
[image: eayJqeO.gif]


They falsely advertised the number of ROPs, the L2 cache, and the fully accessible memory capacity at the advertised speeds. Only 3.5GB of the 4GB physically installed is capable of providing the proper performance level, and dipping into that other 0.5GB segment, due to their purposefully-flawed-by-design memory segmentation, results in major gameplay issues, as I have shown with my testing in my SLI GTX 970 configuration.
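For reference, the corrected numbers side by side (figures as reported across the press after Nvidia's disclosure); a trivial sketch just to make the gap explicit:

```python
# Advertised vs. corrected GTX 970 specs, per NVIDIA's post-launch disclosure.
specs = {
    # metric                 advertised  corrected
    "ROPs":                  (64,        56),
    "L2 cache (KB)":         (2048,      1792),
    "full-speed VRAM (GB)":  (4.0,       3.5),
}
for metric, (advertised, actual) in specs.items():
    print(f"{metric:22s} advertised {advertised:>6}   actual {actual:>6}")
```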
 
Deceptive Tactics of Nvidia...
 
So please, stop doing what this guy is doing below:
[image: eayJqeO.gif]

Dude, I'm like agreeing with you. I was just telling Rcoon not to use DSR, since in that scenario the pixel fillrate could become the bottleneck. I also told him that average frame rates don't go down, but you get micro-stutter, and an automated benchmark suite will never detect that. You must be present during the benchmark to catch it.
 

Thanks for the detailed explanation! Now that it's been explained properly, I agree that this practice has gone too far. I recant my previous statements defending NVIDIA, as I now understand the issue better. I agree with your assessment; we need to stop this here. Build a card right for its price range and modern gaming, or go home.
 
Maxwell is a good architecture, no one can deny that, but we also can't deny that NVIDIA fucked it up with a moronic design decision on the GTX 970...
 
You forgot to add "for now". Talk to us in 6 months' time with new games... If people are already experiencing issues on a small scale, what do you think will happen when more demanding games arrive? That things will somehow magically improve? Riiiiight...
So what card would you recommend?
Which do you use? Because I'm about to buy one.

You forgot to add "for now". Talk to us in 6 months' time with new games... If people are already experiencing issues on a small scale, what do you think will happen when more demanding games arrive? That things will somehow magically improve? Riiiiight...
A card that would play all games now and for 12 to 18 months to come. I'm sick of getting laggy frames and crashes when I purchase new games.
 
A card that would play all games now and for 12 to 18 months to come. I'm sick of getting laggy frames and crashes when I purchase new games.

None, honestly; it doesn't matter if you buy a 290/290X or a 970/980. Either of these sets of cards will likely run out of shader grunt 12-18 months from now before VRAM is the primary issue. A lot of games lately have been showing marginal visual improvements while cutting another 50% off our framerates at the same time. If you always want a stable 60FPS+ with placebo Ultra settings (I don't believe in Ultra on 95% of games on the market; I've compared, and it almost never looks better than High while consuming another gigabyte of RAM), you're probably going to need SLI 980s or CrossFire 290Xs. The smarter idea would likely be to wait for the new AMD lineup and hope they don't list 500W+ and 105 Celsius as a feature this time around. Those should have a better amount of grunt for 1440p+ resolutions, though; that's the next step in architecture for both sides.
 
Correct, only the 3GB version is symmetrical. However, when it was released, 2GB was a lot of VRAM. Who noticed, really? Now a single 660, even a 3GB version, is not going to play the newest games at more than medium settings; the GPU chip itself can't do it. And for games a couple of years old or older, at medium-high, it's likely still not an issue.

Besides, IIRC, those that did discover this didn't present it as a problem, just noted that performance was better on the 3GB versions.

Same situation with the GTX 550 Ti, GTX 460 v2 (192-bit with 1GB VRAM), GTX 660 Ti, and GTX 560 SE. It's not the first time NV has used this kind of arrangement, and Anandtech reported on it at length. I think what made it easier to swallow was the obvious lack of symmetry between the 192-bit bus width and the 1GB or 2GB of memory. In the case of the GTX 970, it seemed symmetrical from the 256-bit/4GB specs.
 
Same situation with the GTX 550 Ti, GTX 460 v2 (192-bit with 1GB VRAM), GTX 660 Ti, and GTX 560 SE. It's not the first time NV has used this kind of arrangement, and Anandtech reported on it at length. I think what made it easier to swallow was the obvious lack of symmetry between the 192-bit bus width and the 1GB or 2GB of memory. In the case of the GTX 970, it seemed symmetrical from the 256-bit/4GB specs.

Please add the 660 (non-Ti) 2GB to that list as well. That version has the exact same memory subsystem as the 660 Ti, albeit based on a different SKU with a lower shader count. The defenders need to start seeing that this is a recurring behavior from NVidia to provide cheaper products, while their marketing is probably deliberately not being kept up to date.

@Vayra86 -- Thanks so much for taking the time to paint the scope of this problem to people.
 
So what card would you recommend?
Which do you use? Because I'm about to buy one.

R9-290X or GTX 980. But personally, I'll wait for the R9-380X. It's so close that it would be stupid to buy something in a rush and regret it later. Wait these two more months, imo. That's what I'll do...
 
I guess it should go without saying that the way the starter of this thread did his testing is pretty much unrealistic; 99.9999% of people won't use those settings or get those results. I would bet even Radeon cards would have a hard time running a game at 4K+ resolution with 4x MSAA, let alone without stuttering issues when you turn.

I know people will flame the hell outta me for stating that fact.
 
I guess it should go without saying that the way the starter of this thread did his testing is pretty much unrealistic; 99.9999% of people won't use those settings or get those results. I would bet even Radeon cards would have a hard time running a game at 4K+ resolution with 4x MSAA, let alone without stuttering issues when you turn.

I know people will flame the hell outta me for stating that fact.
You're making a supposition, not stating a fact, so your argument doesn't hold.

Also, when testing something, the whole point is to stress it to reveal weaknesses and find the limits of performance, so the fact that the product may not be used that way most of the time is irrelevant.
 
The most amusing crime here is the fact that I have one; thousands of people (some of whom don't even own one) are complaining about this very clear and factually demonstrated defect and returning their cards for some obscure purpose, and yet for some reason I don't care in the slightest.

I know that my card is crippled. I know that if I spontaneously decide to use DSR to go from 1440p up to ultra mega super HD and then scale it down again, my card will literally soil itself and make a smutty, stuttery, gooey mess inside its poor little gimped memory units.

And yet, still, for some unknown reason, I don't care.

My processor looks down from its lofty 1150 socket on high, stares at the 970 below, and teases it for all its disabled parts, fully functioning but worthless parts. Sometimes I try to calm him down, but then some random internet person, who's probably not even running a dedicated GPU of any kind, signs up to some forum and starts laughing at how all these 970s are incapable of running at ultra mega big-baller HD through the use of DSR; doesn't bother looking at the graphs and tests done to compare claimed figures and actual performance; and carries on making multiple threads about something that already has news articles with comment sections and existing 100-page threads on countless forums.

Then I realise: I don't use DSR, I'm not buying 4K any time soon, I have yet to hit a 4GB requirement, and my card was cheap as dirt. If it had been sold with a puzzling 3.5GB of VRAM, I'd still have bought it. Maybe I'm just a terrible excuse for a customer, maybe I should be more self-entitled, and perhaps I should take up arms with the militia. But frankly, I can't be bothered.

If you don't (or don't plan to) use the whole bandwidth, then this card is probably the best for you when factoring price/performance, because it performs like a monster for its price.

It's only when you try to fully utilize the bandwidth that you encounter problems: if you use (or plan to use) all the bandwidth available (4K resolutions, newer games down the line, whatever else), then it's a whole different ball game.
 
Ok, first point: I'm NOT defending Nvidia. They're dicks for doing this, frankly, and yes, it's made me think about my next purchase (I'll get either the next top-gen AMD or NV card).

But I've got to get this verified. In the OP, these are the results at >3.6GB usage:

SLI enabled: very bad frametimes - 40+ peaks over 50ms
SLI disabled (i.e. single card): 9 peaks over 50ms

But then look at this, at settings under 3.5GB:

SLI enabled: 10 peaks above 50ms
SLI disabled: <9 peaks above 50ms (and better spaced out)

This graph only shows that above 3.6GB usage, SLI doesn't work. The frame times for >3.6GB and under 3.5GB have the same number of spikes above 50ms. IN FACT, only the SLI setting at 3.6GB+ looks terrible; a single card at 3.6GB has the same frame times as a single card under 3.5GB.
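That comparison is easy to make rigorous. A small sketch that tallies >50ms spikes and the 99th-percentile frame time for each of the four captures being discussed (the file names are hypothetical; one frame duration in milliseconds per line):

```python
# Side-by-side spike comparison for the four configurations discussed above.
# Each log is assumed to hold one frame duration (ms) per line; the file
# names are placeholders for your own exported captures.
runs = {
    "SLI, >3.6GB":    "sli_over_3.6gb.txt",
    "single, >3.6GB": "single_over_3.6gb.txt",
    "SLI, <3.5GB":    "sli_under_3.5gb.txt",
    "single, <3.5GB": "single_under_3.5gb.txt",
}

for label, path in runs.items():
    with open(path) as f:
        times = [float(line) for line in f if line.strip()]
    spikes = sum(t > 50.0 for t in times)           # visible hitches
    p99 = sorted(times)[int(len(times) * 0.99)]     # 99th-percentile frame time
    print(f"{label:15s} spikes>50ms={spikes:3d}  99th pct={p99:5.1f} ms")
```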

Look at the lines: his dark blue line is the most stable of the lot (hovers at 30ms, with the nine peaks), and that is a single card at 3.6GB+. It looks better than the single card at <3.5GB.

Can we have some rationalism here? All that graph shows is that SLI at 3.6GB+ is borked. A single card plays fine with expected (but not awful) fps.

Really, everyone: look at that graph again for SoM. The dark blue line is better than the olive-coloured line (3.6GB usage is better than <3.5GB usage).

SLI disabled, over 3.6GB VRAM usage - blue line, median and peaks at or over 50ms

[image: 1.png]



SLI disabled, under 3.5GB VRAM usage - olive line, median and peaks at or over 50ms

[image: 2.png]



I'm sure there are problems, but this clearly demonstrates that between over 3.6GB and under 3.5GB there isn't that much of a difference. ONLY SLI seems screwed up.

Also, compare with this:

[image: MordorU_3840x2160_PLOT_0.png]


It's clear that at 4K Ultra in SoM (I'm sure it's using more than 3.5GB of VRAM here?) the 970 fares worse, but nothing awful.....

From the OP's own work and this graph from PC Per (using one of the world's most RAM-heavy games at Ultra 4K settings), it's not very clear that the issue being discussed is causing the problems.

Can this be discussed sensibly? I should add I'm currently having sex with JHH's daughter (unless she's under 16 - UK law - in which case I'm having sex with him), so I'm obviously not biased.
 
I believe that's part of the issue. The card itself, be it by the driver, forces its way back to 3.5GB and does its best not to spill over into the 0.5GB segment.

[image: cod-afterbuner.jpg]


That's why there are people saying they don't notice much, or saying they are using all 4GB. Some of those burn-in tests posted show an initial 4GB of memory usage, but even in the brief clips that get posted on YouTube you can see them fall back to 3.5GB.

[image: AdvWarfare_2560x1440_PLOT_0.png]


There is clearly something going on, and it might be tied to the game's texture settings. A game that already uses a lot of VRAM at 1080p will exhibit the issue more when going to a higher resolution like 1440p than a game that uses very little VRAM, where upping the resolution does little for VRAM usage and so needs to be pushed to 4K. Which makes no sense, because you're crippled there anyway on a single card. It makes sense that he is using SLI at 4K.
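If you want to watch that fall-back behavior yourself, polling the driver's reported memory usage while you play is enough to see allocations hover around 3.5GB. A minimal sketch using nvidia-smi's query flags (run it alongside the game; Ctrl+C to stop):

```python
# Log GPU memory usage once a second via nvidia-smi, to watch whether the
# driver keeps allocations at ~3.5GB. Requires NVIDIA's nvidia-smi in PATH.
import subprocess
import time

while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    used_mib = [int(x) for x in out.split()]  # one value per GPU
    stamp = time.strftime("%H:%M:%S")
    print(stamp, " / ".join(f"{m} MiB" for m in used_mib))
    time.sleep(1)
```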

Also, PCPerspective might be using a different version than the one GT is using, at different settings.




Oh, and a virtual bro fist bump for doing JHH's daughter, or him, but that just means you're bi, not biased.
 
It's obvious there is a problem, but it's not so much with the card as with what it's been sold as. If it had been sold as having 3.5GB of memory, it'd be a non-issue. It is, as you allude to, being bought for 4K. To be honest, though, I think if you go for a full-fat 4K monitor you really shouldn't scrimp on graphics. For 4K right now, I'd be buying dual 8GB 290Xs or Titan Blacks.

No matter what, Nvidia will have to do some form of damage control; they'll be trying to figure out the best way to do it without losing face and business to AMD. Going up against NV, AMD have XDMA working to near perfection (when it's optimised), but NV can (and have clearly demonstrated they will) hobble AAA titles with GameWorks. Though looking at AC: Unity, christ, what a mess.

Roll on GM200 and Sea Islands (or whatever it's called).
 
@the54thvoid The best way NVIDIA can avoid losing business to AMD is to offer a free replacement with a GTX 980 and just take the hit; it really won't break the bank. They also need to apologize publicly and unreservedly for this gaffe. Finally, they need to lower the price of the 970 and describe it accurately on their website and on product boxes, or just discontinue it and make a similar product to replace it at the new, lower price point. Somehow I think this company is far too arrogant to do any of this, however.

If AMD eat their lunch, they deserve it, and I'm speaking as a pissed-off NVIDIA owner, not an AMD fanboy, since dodgy tactics like this never help the customer, whichever brand you prefer.
 
@the54thvoid The best way NVIDIA can avoid losing business to AMD is to offer a free replacement with a GTX 980 and just take the hit; it really won't break the bank. They also need to apologize publicly and unreservedly for this gaffe. Finally, they need to lower the price of the 970 and describe it accurately on their website and on product boxes, or just discontinue it and make a similar product to replace it at the new, lower price point. Somehow I think this company is far too arrogant to do any of this, however.

If AMD eat their lunch, they deserve it, and I'm speaking as a pissed-off NVIDIA owner, not an AMD fanboy, since dodgy tactics like this never help the customer, whichever brand you prefer.

The irony is, there are hardly any people with your kind of logic in the GTX 970 case. I'm currently an AMD user, and while I often defend AMD when it deserves it, I'm not really a fanboy and will stand on NVIDIA's side as well when it needs defending. But with all the dumb things they've done in the past few years, they hardly ever give me that chance, which is why I've lately been more on Radeon cards than GeForce. I've had my share of NVIDIA cards and also AMD/ATi cards, and I was happy with all of them. I would be really pissed if I had bought a GTX 970 a week ago like I originally wanted to...

People accuse me of being a fanboy and an NVIDIA hater and all, but I actually watched their GTX 900 series launch presentation with huge enthusiasm and had very positive thoughts about it. And then they destroyed it all with these under-the-hood cover-ups and chip hacks to make the card look better than it really is. That's where I draw the line.
 
Cheers RejZoR. I've just realized that I may have implied that all AMD owners are fanboys, which is most certainly not the case. :oops:

Perhaps my sentence should read "...I'm speaking as a pissed off NVIDIA owner not an AMD owner or fanboy..." to make it clear. :)

I also don't see a problem with denying a company my money over tactics like these, even if I think their products are on the whole better than the competition's.
 