
NVIDIA to Launch GeForce GTX 880 in September

Discussion in 'News' started by btarunr, Aug 1, 2014.

  1. Roel

    Roel

    Joined:
    May 10, 2014
    Messages:
    32 (0.03/day)
    Thanks Received:
    18
    I'm getting tired of hearing that the 880 will be mid-range. It's likely going to be the fastest card available at launch, which is what defines top-end. It doesn't matter that they could have released something faster, or that the 980 will be faster next year; that's how technology works. There will always be something faster next year.
     
    Fluffmeister says thanks.
  2. micropage7

    micropage7

    Joined:
    Mar 26, 2010
    Messages:
    7,276 (2.97/day)
    Thanks Received:
    1,851
    Location:
    Jakarta, Indonesia
    When they offer a better card with lower power consumption and better performance at a friendly price.
     
  3. MxPhenom 216

    MxPhenom 216 Corsair Fanboy

    Joined:
    Aug 31, 2010
    Messages:
    11,837 (5.17/day)
    Thanks Received:
    3,566
    Location:
    Seattle, WA
    There has been talk that Nvidia could skip 20nm altogether and go straight to 16nm.
     
    rtwjunkie says thanks.
  4. rtwjunkie

    rtwjunkie PC Gaming Enthusiast

    Joined:
    Jul 25, 2008
    Messages:
    7,800 (2.55/day)
    Thanks Received:
    10,430
    Location:
    Louisiana -Laissez les bons temps rouler!
    You are correct, it will be the top end of the 8 series. But it's not the top-end Maxwell. GM204 is the mid-range chip for Maxwell. It's the same state of affairs as with the GTX 680. Most people who didn't know anything about the dies simply bought 680's thinking they were the top of the line for Kepler, unaware that Nvidia had held the big chip back for the next lineup. So Nvidia was able to perpetuate the illusion to the uninitiated that this was top-tier, when in reality all they were buying was an unlocked 660 with good die selection.
     
  5. MxPhenom 216

    MxPhenom 216 Corsair Fanboy

    Joined:
    Aug 31, 2010
    Messages:
    11,837 (5.17/day)
    Thanks Received:
    3,566
    Location:
    Seattle, WA
    How about we all stop calling people out on their alleged fanboyism and try to have a civil conversation?
     
  6. THE_EGG

    THE_EGG

    Joined:
    Dec 15, 2011
    Messages:
    1,793 (0.99/day)
    Thanks Received:
    718
    Location:
    Brisbane QLD, Australia
    If GTX 880 pricing starts around the $750-800 AUD mark here in Australia (around entry-level 780 Ti prices), I think it will sell well. The current and growing demand for adequate 4K performance will hopefully be enough to get people buying new video cards, and hopefully prices will drop a bit too (after all, lower prices normally mean higher demand).

    I find it interesting that the launch date seems to get earlier and earlier as time goes on. A few months ago it was predicted the 880 would launch Q1 2015, then it was slated for a December release, then October, and now September. Although it is a paper launch, so as the article says, we probably won't see availability until the end of the year.
     
  7. Roel

    Roel

    Joined:
    May 10, 2014
    Messages:
    32 (0.03/day)
    Thanks Received:
    18
    Yes, it's the same story as with the GTX 680. It really depends on how badly you need an upgrade. If you wait for the 980, you will be stuck with a card that is slower than the 880 for another year. If you don't want to wait that long, it's probably a better choice to get the 880 instead of going for the "old" 780 Ti (for those who don't already have one).
     
  8. rtwjunkie

    rtwjunkie PC Gaming Enthusiast

    Joined:
    Jul 25, 2008
    Messages:
    7,800 (2.55/day)
    Thanks Received:
    10,430
    Location:
    Louisiana -Laissez les bons temps rouler!
    True, true! I agree with your assessment.
     
  9. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,662 (0.81/day)
    Thanks Received:
    188
    Location:
    So. Cal.
    I don't know, but it just feels like the GM107 all over again... a smaller die size and efficiency as the guiding principles.

    The 880 will sit above the 780 (perhaps getting up into 290X range) while not encroaching on the 780 Ti. If Nvidia is feeling really nice they may price it at $450, because this die is perhaps smaller than a GK104, and nowhere near a GK110, which Nvidia won't/can't consistently sell on cards discounted 10-15% below the $500 MSRP.

    So we all know how this goes... a "tech paper teaser" in September, a mid-October launch with the normal reference-brigade cards, and AIB customs by mid-to-late November at the customary 10-15% premium... do the math. Other than efficiency, there'd be no real justification to rush out and get this over the 780.

    This just provides the path to EOL'ing the GK104 (760/770) while increasing margins, so they stay price-competitive with AMD, while permitting the GK110 (2304 SP) to dwindle down. I could see them producing a cost-effective Quadro part to finish off the stock, rather than discounting it to gamers.
     
  10. PatoRodrigues

    PatoRodrigues

    Joined:
    Oct 3, 2012
    Messages:
    229 (0.15/day)
    Thanks Received:
    59
    Location:
    Brazil
    I wonder... how much have the increases in CUDA cores and the die shrinks actually mattered for gamers since Kepler came out? I can't see a reason to upgrade from a 780 or an R9 290 unless it surprises the s**t out of everybody with 2x the performance of a 780 in a single GPU, with better power efficiency, for only +$50 in price.
     
  11. MxPhenom 216

    MxPhenom 216 Corsair Fanboy

    Joined:
    Aug 31, 2010
    Messages:
    11,837 (5.17/day)
    Thanks Received:
    3,566
    Location:
    Seattle, WA
    Now the focus is on getting GPUs powerful enough to drive 4K. That is going to be Nvidia's and AMD's focus for a while: more memory, more bandwidth, and more overall GPU grunt to make use of the increased memory.

    I can definitely see Nvidia going straight to 16nm at the rate 20nm is going, and releasing GM210 (big-die Maxwell) on 16nm. Actually, that is mostly what I hope for.
     
    rtwjunkie says thanks.
  12. rtwjunkie

    rtwjunkie PC Gaming Enthusiast

    Joined:
    Jul 25, 2008
    Messages:
    7,800 (2.55/day)
    Thanks Received:
    10,430
    Location:
    Louisiana -Laissez les bons temps rouler!
    Now THAT would be a probable big increase, and one well worth waiting for!!
     
  13. MxPhenom 216

    MxPhenom 216 Corsair Fanboy

    Joined:
    Aug 31, 2010
    Messages:
    11,837 (5.17/day)
    Thanks Received:
    3,566
    Location:
    Seattle, WA
    All dependent on TSMC really.
     
  14. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,662 (0.81/day)
    Thanks Received:
    188
    Location:
    So. Cal.
    GK106: 221mm² / 980 CUDA cores > GM107: 148mm² / 640 CUDA cores.

    By that comparison they could reduce the die 25-30% and the CUDA count almost 35% (if it scales the same) and still deliver roughly GTX 770 performance. I can't see them delivering anything with 3,200 CUDA cores; if it's anything over 2,000 CUDA cores or a die bigger than 300mm², I'll be surprised and perhaps a little disillusioned.
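    For what it's worth, the core-count side of that comparison can be checked with a throwaway sketch (figures taken straight from the post):

    ```python
    gk106_cores, gk106_area = 980, 221.0   # Kepler GK106
    gm107_cores, gm107_area = 640, 148.0   # Maxwell GM107

    core_cut = 1 - gm107_cores / gk106_cores   # ~35% fewer CUDA cores
    area_cut = 1 - gm107_area / gk106_area     # ~33% smaller die
    print(f"cores: -{core_cut:.0%}, die area: -{area_cut:.0%}")
    ```

    So "Cuda almost 35%" checks out, and the die reduction lands at the top of the 25-30% bracket.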
     
    Last edited: Aug 22, 2014
  15. bpgt64

    bpgt64

    Joined:
    Oct 5, 2008
    Messages:
    1,685 (0.56/day)
    Thanks Received:
    266
    Location:
    ATL, GA
    I really want to see a single GPU solution that can drive a 4k display solo.
     
  16. Hilux SSRG

    Hilux SSRG

    Joined:
    May 1, 2012
    Messages:
    1,024 (0.61/day)
    Thanks Received:
    170
    Location:
    New Jersey, USA
    I've read such talk online, but money talks at the end of the day. I don't see that happening. 16nm chips will be more expensive than 20nm, and AMD/NVIDIA won't jump to squash their own profit margins. Heck, both are still releasing 28nm parts in a few months rather than 20nm, not that they had a choice.
     
  17. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    2,786 (1.45/day)
    Thanks Received:
    1,686
    Location:
    New Zealand
    Unlikely. GPU architectures have a design lead-in time measured in years. It is pretty much established that TSMC's process-node cadence is now out of step with both AMD's and Nvidia's product cycles (20nm late/CLN20G cancelled, 20nm BEOL+16nm FEOL ahead of schedule). Just as with TSMC's cancelled 32nm node, both vendors will likely produce a kludge: ports of 28nm designs that were originally intended for 20nm/20+16nm.
    Then again, if the GM204 cards are 256-bit/4GB, then it is quite possible to market the GTX 880/870 alongside the GTX 780/780 Ti at the same basic performance, especially if the 384-bit cards are aimed at high-res gaming and come with a 6GB framebuffer. It wouldn't surprise me to see the 3GB GTX 780 EOL'ed and Nvidia sanction 6GB for use with the 780 Ti.
    Really?
    GTX 580 - Vendor custom boards available at launch
    GTX 680 - Vendor custom boards available at launch
    GTX 770 - Vendor custom boards available at launch
    GTX 780 - Vendor custom boards available at launch
    GTX 780 Ti - Vendor custom boards available at launch
    First Maxwell cards - Vendor custom boards available at launch

    If you're looking at historical precedent, the only cards that aren't available as vendor custom are dual GPU cards and cards not included in Nvidia's series-market segment numerical naming convention ( GTX Titan/Titan Black, Tesla, Quadro)
    You buy online from Tajikistan ?
    Gigabyte Windforce OC - same price as reference (reviewed by W1zzard on launch day)
    EVGA Superclocked ACX - $10 more than reference (1.5% more) (reviewed by W1zzard on launch day)
    So, just recapping: this card, in your opinion, doesn't have a market even though the specifications aren't known, the price isn't known, its performance isn't known, its actual entry date isn't known,
    and its feature set isn't known, because it conflicts with a card which may or may not be EOL'ed at the time of launch (either in its entirety or as a $500 3GB iteration).
    In what world is a performance GPU only 35% larger than the same vendor's low-end chip? If GM107 is 148mm² packing 640 cores, how the **** is GM204 supposed to pack anything close to 2000 into 200mm²?
    I forgot, the actual mathematics are unimportant... your personal disappointment is the point you're trying to get across by setting an unrealistic target. Well, for my part, I'll be disillusioned if Intel's next desktop CPU doesn't have a thermal envelope of 2 watts and AMD's next flagship GPU doesn't stay under 35°C under full gaming load. When you stock up on Xanax in preparation for this graphics Armageddon, grab me some.
    Depends on whether the die shrink outweighs the wafer cost, as it usually does. 16nmFF (20nm BEOL+16nm FEOL) is supposed to bring a ~15% reduction in die size over the same design rendered on 20nm, but a 15% area reduction does not equate to 15% more die candidates per wafer; the gain depends on the actual die size (you could try inputting various sizes into a die-per-wafer calculator to see the variance). Latest estimates put 16nmFF at ~21% more expensive per wafer than 20nm. Even with the known parameters you would still need to factor in what kind of deal each vendor has in place regarding yield. The usual arrangement is per-wafer with guaranteed minimum yields, or per viable die.
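    For anyone who wants to play with the numbers, here is a minimal sketch of the usual die-per-wafer approximation applied to the ~15% shrink / ~21% wafer-premium figures above (the 350mm² starting die size is purely illustrative, not from the post):

    ```python
    import math

    WAFER_DIAMETER_MM = 300.0  # standard 300 mm wafer

    def dies_per_wafer(die_area_mm2):
        """Common approximation: gross wafer area over die area, minus an edge-loss term."""
        radius = WAFER_DIAMETER_MM / 2
        return int(math.pi * radius ** 2 / die_area_mm2
                   - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

    area_20nm = 350.0              # illustrative 20nm die size, mm^2
    area_16nm = area_20nm * 0.85   # ~15% shrink from 16nmFF, per the post

    d20 = dies_per_wafer(area_20nm)   # 166 candidates
    d16 = dies_per_wafer(area_16nm)   # 198 candidates

    # Relative cost per die candidate, with the 16nmFF wafer ~21% dearer:
    cost_20 = 1.00 / d20
    cost_16 = 1.21 / d16
    print(d20, d16, round(cost_16 / cost_20, 3))
    ```

    With these particular (made-up) sizes the shrink only just fails to cancel the wafer premium, which is exactly the trade-off the post describes: whether 16nmFF comes out ahead depends on the actual die size and the yield deal.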
     
    Last edited: Aug 1, 2014
    Hilux SSRG and Fluffmeister say thanks.
  18. arbiter

    Joined:
    Jun 13, 2012
    Messages:
    976 (0.60/day)
    Thanks Received:
    222
    Better performance and lower power consumption cost $ in R&D, hence the increased cost. AMD tends to be on the other end of that scale: a slower part, but they bump the clocks up to match the competition, which in effect eats more power.

    Yeah, I bought a Gigabyte Windforce GTX 670 when they were released ($399) and it was the same price as the reference card, but with one of the best air coolers on the market. Most gaming never topped 65°C even OC'ed to 1275MHz. Only a few games pushed it to 70-75°C.
     
    Last edited: Aug 1, 2014
  19. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,662 (0.81/day)
    Thanks Received:
    188
    Location:
    So. Cal.
    I'm just looking at it as a GK104 replacement, in step with what Maxwell provided over Kepler. Isn't that what we all understand Nvidia is aiming at?

    GK106 @ 221mm² vs GM107 @ 148mm² = ~33% reduction
    GK104 @ 294mm² - 30% = ~206mm²
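    Checking that arithmetic with a quick sketch (areas in mm², as given in the post):

    ```python
    gk106, gm107 = 221.0, 148.0        # Kepler GK106 vs Maxwell GM107 die areas
    reduction = 1 - gm107 / gk106      # the exact ratio works out to ~33%

    gk104 = 294.0
    gm204_estimate = gk104 * 0.70      # applying the post's conservative 30% shrink
    print(f"reduction: {reduction:.0%}, GM204 estimate: {gm204_estimate:.0f} mm^2")
    ```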
     
    Last edited: Aug 2, 2014
  20. xorbe

    xorbe

    Joined:
    Feb 14, 2012
    Messages:
    1,442 (0.82/day)
    Thanks Received:
    483
    Location:
    Bay Area, CA
    Now would that be the GTX880SE, GTX880, GTX880Ti, or GTX880Ti Black that you want? :D :laugh: :shadedshu:
     
  21. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    2,786 (1.45/day)
    Thanks Received:
    1,686
    Location:
    New Zealand
    Well, that's some deductive logic right there :rolleyes:
    GK106 in its fully enabled form (the GTX 660) has 23% more performance than a fully enabled GM107 (GTX 750 Ti). You expect GM204 to be 20% slower than the GPU it is replacing in the product stack (GK104)?
     
  22. Prima.Vera

    Prima.Vera

    Joined:
    Sep 15, 2011
    Messages:
    3,441 (1.80/day)
    Thanks Received:
    675
    Probably in 2016 or 2017. The tech has slowed down too much and prices have gotten ridiculously high, so my bet is 2018 for an affordable one ;)
     
  23. a_ump

    a_ump

    Joined:
    Nov 21, 2007
    Messages:
    3,681 (1.11/day)
    Thanks Received:
    399
    Location:
    Smithfield, WV
    If all those numbers are true, it would mean that Maxwell is ~15% more efficient with die space than Kepler. Of course, that was just their first Maxwell chip; I'm sure (hope) they have made many more improvements to the Maxwell line since the GTX 750 Ti release.

    @HumanSmoke : Very aggressive with statistics you are lol
     
  24. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    2,786 (1.45/day)
    Thanks Received:
    1,686
    Location:
    New Zealand
    The better comparison would be with GK107 (118mm²), since it, like GM107, features a 128-bit memory I/O. GK106 is not only 192-bit (more of the uncore devoted to memory interfaces), but also sits one rung higher in the respective GPU hierarchy.
    Yes. Regardless of the chips already known, it is a sure bet that the ratios will change with incoming parts. Cache size and structure will likely play a large part in performance-per-watt, and of course some aspects of the smaller GPU don't need scaling up for larger GPUs: the PCI-Express interface, command processor, and video encode/transcode engines are fixed-size, for instance.
    Well, staying with the facts and known (verifiable) numbers is usually a better base to work from than pulling supposition out of thin air based on a wish list - or, in some cases here, the most pessimistic scenario imaginable. The downside is that the eventual products generally conform to the laws of physics and expectation, which can be not quite as exciting, I guess, if you're of the "school of wild guessing" method of prediction - although that method generally leads people from desperately disappointed (unfounded optimism for the vendor they love) to openly resentful (unfounded pessimism for the vendor they dislike). All a bit bipolar from my PoV.
     
    a_ump says thanks.
  25. jagd

    Joined:
    Jul 10, 2009
    Messages:
    467 (0.17/day)
    Thanks Received:
    89
    Location:
    TR
    Nvidia will use whatever is available to them at TSMC (assuming they don't change foundries). There aren't many foundry fabs, and switching to a new process (28nm to 20nm to 16nm, etc.) costs more with every step and brings more difficulty and problems.
    If you look at TSMC's 20nm struggles you'll see what I mean. Nvidia can't simply decide to skip to 16nm, for simple reasons: it will take more time, getting 16nm installed in the fabs looks a long way off, and meanwhile Nvidia would be stuck at 28nm with a power and price disadvantage.

     
    64K says thanks.
