
NVIDIA GeForce GTX 880 and GTX 870 to Launch This Q4

Will it run Crysis in 4K? If the answer is "no", why should we even bother talking about this useless hardware? If the answer is "yes", then shut up and take my money.

Just like "Mom" jokes, the Crysis ones also getting old.

And btw Crysis is a turd that can't be polished.
 
Just like "Mom" jokes, the Crysis ones also getting old.

And btw Crysis is a turd that can't be polished.
Mom jokes are never getting old. but Crysis question is still actual - because if this card can not run newst titles (I do not give a damn about Crysis) on 4K - then why someone should upgraide from like HD 5870 or GTX 580 like cards?
 
I should clarify my point. I was making my comment based upon NVidia's own press slide showing the transition to cost-effective 20nm occurring in Q1 2015.
Fair enough, although I don't personally put much stock in a vendor's slides, especially when it's an Nvidia/TSMC thing - they're like some masochistic couple - the Richard Burton and Elizabeth Taylor of the semicon world. I'd also note that some of the numbers for those projections have changed, or at least been made public, since the TSMC vs Intel transistor density spat earlier in the year.
The difference in cost per transistor between 20nm and 28nm is minimal, making me question whether it's worth putting engineering effort toward shrinking GPUs for a marginal cost savings per GPU (that may never make up the capital expenditure to make new masks and troubleshoot issues) rather than concentrating engineering on completely new GPUs at that smaller process. Unlike in the past, there's a lot more to be gained from a newer, more efficient architecture than from a die shrink.
The whole deal with 20/16nm, like (almost) any new process, is the ability to dial in either a lower power budget or higher clocks. Lower power isn't a "must have" for the high-end GPU market judging by recent events, and higher clocks mean higher localized temps in a more densely packed die, which is a bit of a compromise on a large die (it's problematic enough on the small-die Ivy Bridge/Haswell). If the current architecture can't fully utilise the higher clocks without pipeline bottlenecks, is it worthwhile moving to a smaller node? That's what I meant by "Would the GPU design benefit from, or require, increased transistor density over increased GPU silicon cost for the given price points of the product being sold?". How much real-world performance is gained versus overclock for the 750 Ti, for example (percentage to percentage)? W1zzard measured 14% more performance from an 18.2 - 22.7% clock boost between a stock 750 Ti (980-1150 MHz core) and an OC'ed card (1202-1359 MHz core), so there is a point where higher clocks don't earn their keep.
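A quick back-of-the-envelope check of that scaling, as a Python sketch (the clock figures are the ones quoted above; the "efficiency" number simply falls out of them):

Code:
# Rough scaling check for the 750 Ti numbers above (stock 980/1150 MHz base/boost,
# OC'ed 1202/1359 MHz, +14% average performance).
stock_base, stock_boost = 980, 1150
oc_base, oc_boost = 1202, 1359
perf_gain = 0.14

base_clock_gain = oc_base / stock_base - 1      # ~22.7%
boost_clock_gain = oc_boost / stock_boost - 1   # ~18.2%

# How much of the clock increase actually shows up as performance.
print(f"base clock gain:  {base_clock_gain:.1%}")
print(f"boost clock gain: {boost_clock_gain:.1%}")
print(f"perf gained per % of boost clock: {perf_gain / boost_clock_gain:.0%}")  # ~77%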
 
Mom jokes never get old, but the Crysis question is still relevant: if this card can't run the newest titles (I don't give a damn about Crysis specifically) at 4K, then why should someone upgrade from a card like the HD 5870 or GTX 580?

The question should not be "can it run the newest games" because a graphics card from 10 years ago can do that. The question should be "can it run the newest games faster or at better quality" to which the answer is a definite yes.

There is a correlation between the power of GPUs and the complexity of games. If you're waiting to buy a GPU that can play the latest games at highest settings then you will never buy one because the latest games will always be setting the bar higher.
 
Mom jokes never get old, but the Crysis question is still relevant: if this card can't run the newest titles (I don't give a damn about Crysis specifically) at 4K, then why should someone upgrade from a card like the HD 5870 or GTX 580?

VRAM usage is only going to go up. I upgraded from my 3 x 570s because 1280 MB of VRAM wasn't enough to run Max Payne 3 at ultra settings at 1080p.
 
I don't understand why people think a 256-bit/32 ROP chip is going to have something like 3200sp. That makes absolutely no sense.

I know, right? It made sense for the GTX 560 Ti (GF114) to have 384sp and 256-bit/32 ROPs. A chip with four times the cores (1536sp) on 256-bit/32 ROPs is so totally unimaginable! NVIDIA would never make such a chip. :rolleyes:

Oh wait...it did. The GK104.:slap:

Ermagerd...3200 SP and 256-bit/32 ROP? Totally unimaginable and borderline blasphemous!
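For perspective, here's a rough sketch of how the shaders-to-bandwidth ratio has already stretched on a 256-bit bus. The GF114/GK104 figures are the well-known ones; the memory clock for the rumored card is purely an assumption:

Code:
# Shaders per GB/s of memory bandwidth on a 256-bit bus (approximate figures).
def bandwidth_gbps(bus_bits, effective_mem_mhz):
    return bus_bits / 8 * effective_mem_mhz / 1000  # bytes/transfer * MT/s -> GB/s

chips = {
    "GF114 (GTX 560 Ti)": (384,  bandwidth_gbps(256, 4008)),  # ~128 GB/s
    "GK104 (GTX 680)":    (1536, bandwidth_gbps(256, 6008)),  # ~192 GB/s
    "Rumored GTX 880":    (3200, bandwidth_gbps(256, 7000)),  # assumed 7 GHz GDDR5
}
for name, (shaders, bw) in chips.items():
    print(f"{name}: {shaders} sp, {bw:.0f} GB/s, {shaders / bw:.1f} sp per GB/s")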
 
Dude, you have so much to learn about computer software, I don't even know where you should start...

I really would like to know where that came from because it does not seem to be relevant in any way to what the guy stated.
 
Just like "Mom" jokes, the Crysis ones also getting old.

And btw Crysis is a turd that can't be polished.

Crysis can be read as the Crysis series, and as of yet those are still among the heaviest games out there, so using them as a standard for the next generation to overcome is not such an odd request.
Also, nobody was talking about the quality of the games.
 
People are still obsessed with stupid power consumption. It's like buying a Ferrari and then driving around at 50 km/h to conserve petrol. Or worse, driving a Ferrari and constantly bitching about MPG. Idiotic. Give me a cheaper top-performing card and I don't give a toss about consumption.
Today's top-performing cards are limited by power consumption = heat output. So by your analogy, today's Ferraris will always be limited by the rev limiter of the engine, which can only be relaxed if you improve the engine tech.
 
Not really. RPM is not everything. If you increase the engine capacity, you don't need as many RPM to compensate. Besides, if it were really a thermal issue, all GPUs would use water cooling as the stock cooling solution. But they still use crappy little coolers. So there is plenty of headroom...
 
Not really. RPM is not everything. If you increase the engine capacity, you don't need as many RPM to compensate. Besides, if it were really a thermal issue, all GPUs would use water cooling as the stock cooling solution. But they still use crappy little coolers. So there is plenty of headroom...

Less power used initially means not only a smaller thermal envelope but also a bigger prospective overclock, since you have more power available before the thing pops, and more power to use for OC'ing before you start needing water cooling.

That, and I prefer companies at least trying not to destroy our planet for the sake of bigger numbers. Granted, high-end GPUs as a concept still destroy the planet, but at least it's a dent.

I'm glad you don't care about power consumption. BUT MOST OF THE REST OF US DO.
 
People are still obsessed with stupid power consumption. It's like buying a Ferrari and then driving around at 50 km/h to conserve petrol. Or worse, driving a Ferrari and constantly bitching about MPG. Idiotic. Give me a cheaper top-performing card and I don't give a toss about consumption.

You don't care about power consumption, but you do care about price? You realise electricity costs money? I care about consumption insofar as the money saved on electricity will allow me to get a better card.
 
People who buy 2x 780 Ti do not care about the cost of electricity.
 
People who buy 2x 780 Ti do not care about the cost of electricity.

And you know this because you asked every person who owns such a config, right?
*Removed (language)*
If you are just browsing the internet or doing any of the other stuff you can and will do on a PC besides gaming, which barely requires a GPU of any kind, you don't want your PC to use up tons of electricity.
 
So, it seems to me this is just an incremental upgrade on current gen, with the addition of much better energy efficiency. That means the really big increase in performance will come with the 9-series, right?
 
So, it seems to me this is just an incremental upgrade on current gen, with the addition of much better energy efficiency. That means the really big increase in performance will come with the 9-series, right?

It's taking waaay too long for someone who wants to upgrade but is waiting for a proper power upgrade ><
 
It's taking waaay too long for someone who wants to upgrade but is waiting for a proper power upgrade ><

Pretty much like the processor market. People who are still on Sandy Bridge STILL have no reason at all to upgrade from their stupendously overclocked processors to the new Devil's Canyon chips, besides a small performance increase which isn't necessarily needed. The GPU market is stagnating and no real performance improvement is coming because we're stuck on the 28nm process and everything is just a rebranded architecture.
 
Personally I'm much more excited to see what AMD's next cards can do, since they'll be on GF 28nm instead of TSMC.

Oh, and the specs listed here are completely inaccurate / made up. No way a card with that many shaders could function on a 256-bit memory bus.
 
People are still obsessed with stupid power consumption. It's like buying a Ferrari and then driving around at 50 km/h to conserve petrol. Or worse, driving a Ferrari and constantly bitching about MPG. Idiotic. Give me a cheaper top-performing card and I don't give a toss about consumption.

That's what I thought too, until I got 2 AMD 290s. It's the first time I've actually ever heard my 1000W PSU's fan, and it was really, really loud... so much for SilentPro.
 
Pretty much like the processor market. People who are still on Sandy Bridge STILL have no reason at all to upgrade from their stupendously overclocked processors to the new Devil's Canyon chips, besides a small performance increase which isn't necessarily needed. The GPU market is stagnating and no real performance improvement is coming because we're stuck on the 28nm process and everything is just a rebranded architecture.

Exactly! Other than for more VRAM, I see really zero incentive to upgrade from my 780 until the 9-series. The VRAM is gonna be the killer, so I'll probably stay at 1080p so I can still maximize visuals.

Now, my fiance's rig (Frankenrig, below), on the other hand, could probably see some really good improvement with the 8-series, going from a 660 Ti.
 
Power consumption and heat arguments typically also hit noise arguments. Some people only care about the latter. So sure, you can afford to pay for the power used and don't mind dropping the A/C a few degrees to counteract the heat being added to the room, but if that thing idles above 45 dBA it's going to be annoying. So even those with unlimited funds and a "gimme more powa" mentality will still want a quieter rig when they're just browsing the net.

After 4 years of Fermi SLI, I'm ready for something with less of all the above. I'm also interested in semi-portability, so a gaming laptop is where it's at for me.

But say you've been sitting on 580 SLI and the rest of your rig runs fine. I can see an 880/870 that runs at less than half the power while offering more performance being a very attractive solution.
 
I have to disagree with you here. 20nm isn't going to be less expensive than 28nm per transistor, so there's no financial incentive for a die shrink and thus it won't be done. It makes more financial sense to sell a large 28nm chip than a smaller 20nm chip.

20nm will only be for the extreme high end this generation and will only be used in cases where it's impossible to manufacture a larger 28nm chip (e.g. you can't make a 28nm, 15 billion transistor, 1100mm^2 GM100). 20nm won't become mainstream until NVidia (or anyone else) can't achieve their performance targets on 28nm, which likely will not happen until the generation after this.

Cost is just one variable. Being ahead of the competition by providing a cooler, less power-hungry, and faster chip is another, because clients will be more likely to buy from you if the competition is not up to par. If that were not the case, companies would not invest in fabs at all. Competition is the magic in all of this.
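On the die-size point in the quote above, a quick sanity check of why 15 billion transistors won't happen at 28nm (using GK110's published ~7.1B transistors on ~561 mm^2 as the density reference; the rest is just arithmetic):

Code:
# Implied die size of a 15B-transistor chip at GK110's 28nm density.
gk110_transistors = 7.1e9
gk110_area_mm2 = 561
density = gk110_transistors / gk110_area_mm2   # ~12.7M transistors per mm^2

target_transistors = 15e9
print(f"implied area: {target_transistors / density:.0f} mm^2")  # ~1185 mm^2
# Roughly double the biggest GPUs actually shipping, so 28nm is out of the question.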
 
3200 cores and 256-bit mem bus? Seems unlikely.
 
You don't care about power consumption, but you do care about price? You realise electricity costs money? I care about consumption insofar as the money saved on electricity will allow me to get a better card.

Sorry, but I call BS on that. For the price of the power used, converted to €, you'll see maybe a 10 EUR difference a year compared to me.
You don't care about power consumption, but you do care about price? You realise electricity costs money? I care about consumption insofar as the money saved on electricity will allow me to get a better card.

With the savings from your more "power efficient" graphics card, you'd save about 10 € a year, and I'm being very optimistic here. Will you really keep it for 10 years so you can save up 100 € and buy a better graphics card with it? Sure, power efficiency is nice if it just happens to be cheap. Otherwise everyone charges such ridiculous premiums for it that you never earn it back with usage alone - not with the primary usage graphics cards were intended for, and that's gaming. Bitcoin mining is something completely different, and I'll still stand by my statement that ALL bitcoin mining is a total and entire waste of the world's resources: grinding pointless algorithms and burning electricity for them so you can spend the result as real money. A one-sided, non-productive manufacturing of world goods. And the only ones who actually made a profit out of it were the graphics card makers and no one else.
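For anyone who wants to plug in their own numbers, a quick sketch of the annual cost difference (the wattage delta, gaming hours, and €/kWh below are assumptions for illustration, not anyone's actual figures):

Code:
# Annual electricity cost difference between two cards - adjust to taste.
watt_delta = 80            # assumed extra draw of the less efficient card under load (W)
gaming_hours_per_day = 3   # assumed
price_per_kwh = 0.25       # assumed, EUR

kwh_per_year = watt_delta / 1000 * gaming_hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/year -> {kwh_per_year * price_per_kwh:.2f} EUR/year")
# ~88 kWh/year -> ~21.90 EUR/year with these assumptions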
 
These rumored specs seem to be, as stated, just rumor, because as we've seen, the new Maxwell is about efficiency and better performance per CUDA core instead of just cramming more onto the die. I would find it hard to believe that after what, 4 generations of cards, they would drop the bus width down to 256-bit. Too much of this seems like a mixed-up wish list. Now maybe these specs are 100% accurate and we are all just blabbering, but this just seems very suspicious because it would not follow the route GM (Maxwell) seems to be going.

Now, as far as the die-shrink obsession goes, I really don't think that's a big deal. I would rather wait for better stability than shrink just for the sake of shrinking. GM already proved with the 750 Ti that we can use less power and fewer cores yet achieve better performance on the 28nm process.

As for the power consumption debate, lower power consumption is something to strive for, since we have started to go a little crazy in that area. However, the differences you're talking about in most cases amount to zip/zilch/nada on a power bill, even in some of the more expensive regions. With things like ZeroCore and the like, if the computer is idle then it's not using much power, and even under gaming or 100% load the power consumption is not outrageous even on the craziest of machines. The difference these things would make on a power bill would take years to add up to an amount that looks like actual savings under general use. So unless you're running your computer 24/7 under load, power consumption as a money saver is a meh topic.
 