Friday, August 1st 2014

NVIDIA to Launch GeForce GTX 880 in September

NVIDIA is expected to unveil its next-generation high-end graphics card, the GeForce GTX 880, in September 2014. The company could tease its upcoming products at Gamescom, and is reportedly holding a large media event in California this September, where it's widely expected to discuss high-end graphics cards based on the "Maxwell" architecture. Much like AMD's Hawaii press event, which preceded the actual launch of its R9 290 series by several weeks, NVIDIA's event is expected to be a paper launch of one or more graphics cards based on its GM204 silicon, with market availability expected in time for Holiday 2014 sales.

The GM204 is expected to be NVIDIA's next workhorse chip, which will be marketed as high-end in the GeForce GTX 800 series and performance-segment in the following GTX 900 series, much like how the company milked its "Kepler" based GK104 across two series. It's expected to be built on the existing 28 nm process, although one cannot rule out an optical shrink to 20 nm later (as NVIDIA did when it shrunk the G92 from 65 nm to 55 nm). The GTX 880 reportedly features around 3,200 CUDA cores and 4 GB of GDDR5 memory.
Source: VideoCardz

96 Comments on NVIDIA to Launch GeForce GTX 880 in September

#26
micropage7
When they offer a better card with lower power consumption, better performance, and a friendly price.
Posted on Reply
#27
MxPhenom 216
ASIC Engineer
There has been talk that Nvidia could skip 20nm altogether and go to 16nm.
Posted on Reply
#28
rtwjunkie
PC Gaming Enthusiast
Roel: I get tired of hearing that the 880 will be mid-range. It's likely going to be the fastest card available at launch, which defines top-end. It doesn't matter if they could have released something faster, or if the 980 will be faster next year; that's how technology works, there will always be something faster next year.
You are correct, it will be the top end of the 8 series. But it's not the top-end Maxwell; GM204 is the mid-range chip for Maxwell. It's the same state of affairs that happened with the GTX 680. Most people who didn't know anything about the die simply bought 680s thinking that was the top-of-the-line Kepler, unaware that Nvidia had held back that chip until the next lineup. So Nvidia was able to perpetuate the illusion to the uninitiated that this was top-tier, when in reality all they were buying was an unlocked 660 with good die selection.
Posted on Reply
#29
MxPhenom 216
ASIC Engineer
theoneandonlymrk: Some are very quick with the fanboy crap here; it's only PR news, let's not get too excited as the cards aren't out yet.
I can't see a GTX 880 ending up at less than $500, but it looks like it might have reasonable performance for the price. I'm not convinced by the shader count either; IMHO that would be the full-bin shader count, and it's rare for a GPU to be built using the max bin straight away, but I suppose if it's 28nm it is a mature process.
How about we all stop calling people out on their alleged fanboyism and try to have a civil conversation?
Posted on Reply
#30
THE_EGG
If GTX 880 pricing starts around the $750-800 AUD mark here in Australia (around entry-level 780 Ti prices), I think it will sell well. The current and increasing flood of demand for adequate 4K performance will hopefully be enough to get people to buy a new video card, and hopefully drop prices a bit (after all, lower prices normally equal higher demand).

I find it interesting that the launch date seems to get earlier and earlier as time goes on. A few months ago it was predicted the 880 would launch in Q1 2015, then it was slated for a December release, then October, and now September. Although it is a paper launch, so as the article says, we probably won't see availability till the end of the year.
Posted on Reply
#31
Roel
rtwjunkie: You are correct, it will be the top end of the 8 series. But it's not the top-end Maxwell; GM204 is the mid-range chip for Maxwell. It's the same state of affairs that happened with the GTX 680. Most people who didn't know anything about the die simply bought 680s thinking that was the top-of-the-line Kepler, unaware that Nvidia had held back that chip until the next lineup. So Nvidia was able to perpetuate the illusion to the uninitiated that this was top-tier, when in reality all they were buying was an unlocked 660 with good die selection.
Yes, it's the same story as with the GTX 680. It really depends on how badly you need an upgrade. If you wait for the 980 then you will be stuck with a card that is slower than the 880 for another year. However, if you don't want to wait that long, the 880 is probably a better choice than the "old" 780 Ti (for those who don't already have a 780 Ti).
Posted on Reply
#32
rtwjunkie
PC Gaming Enthusiast
Roel: Yes, it's the same story as with the GTX 680. It really depends on how badly you need an upgrade. If you wait for the 980 then you will be stuck with a card that is slower than the 880 for another year. However, if you don't want to wait that long, the 880 is probably a better choice than the "old" 780 Ti (for those who don't already have a 780 Ti).
True, true! I agree with your assessment.
Posted on Reply
#33
Casecutter
I don't know, but it just feels like the GM107 all over again... a smaller die size and efficiency as the guiding principles.

The 880 will be above the 780 (perhaps getting up into 290X range) while not encroaching on the 780 Ti. If Nvidia is feeling really nice they may price it at $450, because this die is perhaps smaller than a GK104, while nowhere near a GK110, which Nvidia won't/can't consistently sell on cards discounted 10-15% below the $500 MSRP.

So we all know how this goes… a "tech paper teaser" in September, a mid-October launch with the normal reference brigade of cards, while AIB customs arrive mid-to-late November for the customary 10-15% charge… do the math. Other than efficiency there'd be no real justification to run out and get this over the 780.

This just provides the path to facilitating EOL of the GK104 (760/770) while increasing the margins, so they remain price-effective against AMD, while permitting GK110 (2304 SP) to dwindle down. I could see them producing a cost-effective Quadro part to finish it off, rather than discounting them to gamers.
Posted on Reply
#34
PatoRodrigues
I wonder... how much has the increase in CUDA cores and die shrinks actually mattered for gamers since Kepler came out? I can't see a reason to upgrade from a 780 or an R9 290 unless it surprises the s**t out of everybody with 2x the performance of a 780 in a single GPU, with better power efficiency and only +$50 in price.
Posted on Reply
#35
MxPhenom 216
ASIC Engineer
PatoRodrigues: I wonder... how much has the increase in CUDA cores and die shrinks actually mattered for gamers since Kepler came out? I can't see a reason to upgrade from a 780 or an R9 290 unless it surprises the s**t out of everybody with 2x the performance of a 780 in a single GPU, with better power efficiency and only +$50 in price.
Now, the focus is getting powerful enough GPUs to accelerate 4k. That is going to be Nvidia and AMD's focus for a while. More memory, bandwidth and overall GPU grunt to make use of the increased memory.

I can definitely see Nvidia going right to 16nm at the rate 20nm is going, and releasing GM210 (big-die Maxwell) on 16nm. Actually, this is mostly what I hope for.
Posted on Reply
#36
rtwjunkie
PC Gaming Enthusiast
MxPhenom 216: I can definitely see Nvidia going right to 16nm at the rate 20nm is going, and releasing GM210 (big-die Maxwell) on 16nm. Actually, this is mostly what I hope for.
Now THAT would be a probable big increase, and one well worth waiting for!!
Posted on Reply
#37
MxPhenom 216
ASIC Engineer
rtwjunkie: Now THAT would be a probable big increase, and one well worth waiting for!!
All dependent on TSMC really.
Posted on Reply
#38
Casecutter
GK106 221mm² / 980 CUDA cores > GM107 148mm² / 640 CUDA cores.

By this they could reduce the die 25-30% and the CUDA count by almost 35% (if it scales the same) and it should/could still be at GTX 770 performance. I couldn't see them delivering anything with 3,200 CUDA cores; if it's anything over 2,000 CUDA cores or a die bigger than 300mm², I'll be surprised and perhaps a little disillusioned.
Posted on Reply
#39
bpgt64
I really want to see a single GPU solution that can drive a 4k display solo.
Posted on Reply
#40
Hilux SSRG
MxPhenom 216: There has been talk that Nvidia could skip 20nm altogether and go to 16nm.
I've read such talk online, but money talks at the end of the day. I don't see that happening. 16nm chips will be more expensive than 20nm, and AMD/NVIDIA won't jump to squash their profit margins. Heck, both are still releasing 28nm parts in a few months rather than 20nm, not that they had a choice.
Posted on Reply
#41
HumanSmoke
Casecutter: I don't know, but it just feels like the GM107 all over again... a smaller die size and efficiency as the guiding principles.
Unlikely. GPU architectures have a design lead-in time measured in years. It has been pretty much established that TSMC's process node cadence is now out of step with both AMD's and Nvidia's product cycles (20nm late/CLN20G cancelled, 20nm BEOL+16nm FEOL ahead of schedule). Just as with TSMC's cancelled 32nm node, both vendors will likely produce a kludge - 28nm ports of designs that were originally intended for 20nm/20+16nm.
Casecutter: The 880 will be above the 780 (perhaps getting up into 290X range) while not encroaching on the 780 Ti. If Nvidia is feeling really nice they may price it at $450, because this die is perhaps smaller than a GK104, while nowhere near a GK110, which Nvidia won't/can't consistently sell on cards discounted 10-15% below the $500 MSRP.
Then again, if the GM204 cards are 256-bit/4GB, then it is quite possible to market the GTX 880/870 and GTX 780/780 Ti with the same basic performance alongside each other, especially if the 384-bit cards are aimed at high-res gaming and come with a 6GB framebuffer. It wouldn't surprise me to see the 3GB GTX 780 EOL'ed, and Nvidia sanction 6GB for use with the 780 Ti.
Casecutter: So we all know how this goes… a "tech paper teaser" in September, a mid-October launch with the normal reference brigade of cards, while AIB customs arrive mid-to-late November
Really?
GTX 580 - Vendor custom boards available at launch
GTX 680 - Vendor custom boards available at launch
GTX 770 - Vendor custom boards available at launch
GTX 780 - Vendor custom boards available at launch
GTX 780 Ti - Vendor custom boards available at launch
First Maxwell cards - Vendor custom boards available at launch

If you're looking at historical precedent, the only cards that aren't available as vendor custom are dual-GPU cards and cards not included in Nvidia's series-market segment numerical naming convention (GTX Titan/Titan Black, Tesla, Quadro).
Casecutter: for the customary 10-15% charge… do the math
You buy online from Tajikistan?
Gigabyte Windforce OC - same price as reference (reviewed by W1zzard on launch day)
EVGA Superclocked ACX - $10 more than reference (1.5% more) (reviewed by W1zzard on launch day)
Casecutter: Other than efficiency there'd be no real justification to run out and get this over the 780.
So, just recapping: this card, in your opinion, doesn't have a market even though the specifications aren't known, the price isn't known, its performance isn't known, its actual entry date isn't known, and its feature set isn't known, because it conflicts with a card which may or may not be EOL'ed at the time of launch (either in its entirety or as a $500 3GB iteration).
Casecutter: I couldn't see them delivering anything with 3,200 CUDA cores; if it's anything over 2,000 CUDA cores or a die bigger than 200mm², I'll be surprised and perhaps a little disillusioned.
In what world is a performance GPU only 35% larger than the same vendor's low-end chip? If GM107 is 148mm² packing 640 cores, how the **** is GM204 supposed to pack anything close to 2,000 into 200mm²?
I forgot, the actual mathematics are unimportant... your personal disappointment is the point you're trying to get across by setting an unrealistic target. Well, for my part, I'll be disillusioned if Intel's next desktop CPU doesn't have a thermal envelope of 2 watts and AMD's next flagship GPU doesn't stay under 35°C under full gaming load. When you stock up on Xanax in preparation for this graphics Armageddon, grab me some.
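For the record, the back-of-envelope version goes something like this (using only the GM107 die size and core count quoted in this thread; the 2,000-core / 200mm² GM204 is your hypothetical, not a confirmed spec):

# Rough core-density sanity check (figures as quoted above; GM204 numbers are hypothetical)
gm107_area_mm2 = 148.0
gm107_cores = 640

density = gm107_cores / gm107_area_mm2            # ~4.3 cores per mm^2
print(f"GM107 density: {density:.1f} cores/mm^2")

# Area a 2,000-core part would need at the same density, ignoring the uncore
# (memory I/O, display, video engines) that doesn't shrink with core count
print(f"2,000 cores would need ~{2000 / density:.0f} mm^2")   # roughly 460 mm^2, nowhere near 200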
Hilux SSRG: I've read such talk online, but money talks at the end of the day. I don't see that happening. 16nm chips will be more expensive than 20nm, and AMD/NVIDIA won't jump to squash their profit margins. Heck, both are still releasing 28nm parts in a few months rather than 20nm, not that they had a choice.
Depends upon whether the die shrink outweighs the wafer cost, as it usually does. 16nmFF (20nm BEOL+16nm FEOL) is supposed to bring a ~15% reduction in die size over the same design rendered on 20nm - but a 15% reduction does not equate to 15% more die candidates per wafer, which is dependent upon the actual die size (you could try inputting various sizes into a die-per-wafer calculator to see the variances). Latest estimates put 16nmFF at ~21% more expensive per wafer than 20nm. Even with the known parameters you would still need to factor in what kind of deal each vendor has in place regarding yield. The usual arrangement is per-wafer with guaranteed minimum yields, or per viable die.
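If you want to play with it yourself, here's a minimal sketch of the usual gross die-per-wafer approximation (the 350mm² starting die is just an example, and the 15% shrink / 21% wafer-cost deltas are the estimates mentioned above, nothing confirmed):

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Standard gross die-per-wafer approximation; ignores yield, scribe lines and edge exclusion
    radius = wafer_diameter_mm / 2.0
    return (math.pi * radius ** 2) / die_area_mm2 - (math.pi * wafer_diameter_mm) / math.sqrt(2.0 * die_area_mm2)

area_20nm = 350.0                 # example die size on 20nm, mm^2 (hypothetical)
area_16ff = area_20nm * 0.85      # ~15% smaller on 16nmFF, per the estimate above

for label, area in (("20nm", area_20nm), ("16FF", area_16ff)):
    print(f"{label}: {area:.0f} mm^2 -> ~{dies_per_wafer(area):.0f} gross dies per 300mm wafer")
# ~166 vs ~199 here, i.e. ~20% more candidates for a 15% shrink - then weigh that
# against the ~21% higher quoted wafer cost and whatever yield deal is in place.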
Posted on Reply
#42
arbiter
micropage7: When they offer a better card with lower power consumption, better performance, and a friendly price.
Better performance and lower power consumption cost money in R&D, hence increased cost. AMD tends to be on the other end of that scale: a slower part, but they bump the clocks up to match the competition, which in effect eats more power.
HumanSmoke: You buy online from Tajikistan?
Gigabyte Windforce OC - same price as reference (reviewed by W1zzard on launch day)
EVGA Superclocked ACX - $10 more than reference (1.5% more) (reviewed by W1zzard on launch day)
Yeah, I bought a GTX 670 Gigabyte Windforce card when they were released ($399) and it was the same price as the reference card, but with one of the best air coolers on the market. Most gaming never topped 65°C even OC'ed to 1275 MHz. Only a few games pushed it to 70-75°C.
Posted on Reply
#43
Casecutter
I'm just looking at it as a GK104 replacement, in step with what they provided with Maxwell over Kepler. Isn't this what we all understand Nvidia is looking at?

GK106 @ 221mm² vs GM107 @ 148mm² = ~35% reduction
GK104 @ 294mm² - 30% = 205mm²
Posted on Reply
#44
xorbe
RejZoR: Well, I'm not asking for a Titan. Just a high-end card (GTX 880). Not interested in top end (Titan).
Now would that be the GTX880SE, GTX880, GTX880Ti, or GTX880Ti Black that you want? :D :laugh: :shadedshu:
Posted on Reply
#45
HumanSmoke
Casecutter: I'm just looking at it as a GK104 replacement, in step with what they provided with Maxwell over Kepler. Isn't this what we all understand Nvidia is looking at?
GK106 @ 221mm² vs GM107 @ 148mm² = ~35% reduction
GK104 @ 294mm² - 30% = 205mm²
Well, that's some deductive logic right there :rolleyes:
GK106 in its fully enabled form (the GTX 660) has 23% more performance than a fully enabled GM107 (GTX 750 Ti). You expect GM204 to be 20% slower than the GPU it is replacing in the product stack (GK104)?
Posted on Reply
#46
Prima.Vera
bpgt64: I really want to see a single GPU solution that can drive a 4k display solo.
Probably in 2016 or 2017. The tech has slowed down too much and prices have gotten ridiculously higher, so my bet is 2018 for an affordable one ;)
Posted on Reply
#47
a_ump
HumanSmoke: Well, that's some deductive logic right there :rolleyes:
GK106 in its fully enabled form (the GTX 660) has 23% more performance than a fully enabled GM107 (GTX 750 Ti). You expect GM204 to be 20% slower than the GPU it is replacing in the product stack (GK104)?
If all those numbers are true, it would mean that Maxwell is ~15% more efficient with die space than Kepler. Of course, that was just their first Maxwell chip; I'm sure (hope) they have made many more improvements to the Maxwell line since the GTX 750 Ti release.
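Roughly, using just the die sizes and the 23% performance delta quoted above (so a ballpark, not a measurement):

# Perf-per-mm^2 from the figures quoted above (GTX 660 ~23% faster than GTX 750 Ti)
gtx660_area, gtx660_perf = 221.0, 1.23      # GK106, fully enabled
gtx750ti_area, gtx750ti_perf = 148.0, 1.00  # GM107, fully enabled

kepler = gtx660_perf / gtx660_area
maxwell = gtx750ti_perf / gtx750ti_area
print(f"Maxwell perf/mm^2 advantage: {maxwell / kepler - 1:.0%}")
# ~21% with these numbers; the exact figure depends on which performance delta you use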

@HumanSmoke: Very aggressive with statistics you are lol
Posted on Reply
#48
HumanSmoke
a_ump: If all those numbers are true, it would mean that Maxwell is ~15% more efficient with die space than Kepler.
The better comparison would be with GK107 (118mm²) since it, like GM107, features a 128-bit memory I/O. GK106 is not only 192-bit (more of the uncore devoted to memory interfaces), but also sits one rung higher in the respective GPU hierarchy.
a_ump: Of course, that was just their first Maxwell chip; I'm sure (hope) they have made many more improvements to the Maxwell line since the GTX 750 Ti release.
Yes. Regardless of the chips already known, it is a sure bet that the ratios change with incoming parts. Cache size and structure will likely play a large part in performance-per-watt, and of course, some aspects of the smaller GPU don't need scaling up for larger GPUs - the PCI-Express interface, command processor, and video encode/transcode engines are fixed size for instance.
a_ump: @HumanSmoke: Very aggressive with statistics you are lol
Well, if you stay with the facts and known (verifiable) numbers, it's usually a better base to work from than pulling supposition out of thin air based on a wish list - or in some cases here, the most pessimistic scenario imaginable. The downside is that the eventual products generally conform to the laws of physics and expectation - which can be not quite as exciting, I guess, if you're of the "school of wild guessing" method of prediction - although that method generally leads people to range from desperately disappointed (unfounded optimism about the vendor they love) to openly resentful (unfounded pessimism about the vendor they dislike). All a bit bipolar from my PoV.
Posted on Reply
#49
jagd
Nvidia will use what's available to them at TSMC (TSMC, if they don't change their foundry). There aren't many foundry fabs, and switching to a new process (28nm to 20nm to 16nm, etc.) costs more with every step and brings more difficulty and problems.
If you look at TSMC's 20nm struggle you'll see what I mean. Nvidia can't simply decide to skip to 16nm, for simple reasons: it will take more time, getting 16nm installed in the fabs looks a long way off, and Nvidia would be stuck at 28nm with a power and price disadvantage.
MxPhenom 216: There has been talk that Nvidia could skip 20nm altogether and go to 16nm.
Posted on Reply
#50
64K
jagd: Nvidia will use what's available to them at TSMC (TSMC, if they don't change their foundry). There aren't many foundry fabs, and switching to a new process (28nm to 20nm to 16nm, etc.) costs more with every step and brings more difficulty and problems.
If you look at TSMC's 20nm struggle you'll see what I mean. Nvidia can't simply decide to skip to 16nm, for simple reasons: it will take more time, getting 16nm installed in the fabs looks a long way off, and Nvidia would be stuck at 28nm with a power and price disadvantage.
Good points. I can't see Nvidia being able to stretch the 28nm Maxwells out for another year or more after the release of the GTX 880 either, and the engineering for the 20nm Maxwell has already been paid for, so they will probably want to release a line of 20nm Maxwells to recoup their investment.
Posted on Reply