
GM107 Features 128 CUDA Cores Per Streaming Multiprocessor

Discussion in 'News' started by btarunr, Feb 13, 2014.

  1. Recus

    Recus

    Joined:
    Jul 10, 2011
    Messages:
    528 (0.44/day)
    Thanks Received:
    179
    Nvidia has always achieved more with fewer CUDA cores since they were introduced:
    8800 GTX/128 - HD 2900/320
    GTX 285/240 - HD 4890/800
    GTX 480/480 - HD 5870/1600
    GTX 580/512 - HD 6970/1536
    GTX 680/1536 - HD 7970/2048
    GTX 780/2304-2880 - R9 290X/2816

    Now both companies are tied, and Nvidia is optimizing its cores again. One day AMD will realize that something is not right with 10,000 SPs, 700 W TDP, 199.9°C.
     
  2. blanarahul

    blanarahul

    Joined:
    Dec 17, 2011
    Messages:
    116 (0.11/day)
    Thanks Received:
    7
    GM107 has 2 megabytes of cache?? I find that extremely hard to believe. It must be 2 megabits, i.e. 256 kilobytes. Maybe 512 kilobytes at most. But 2 megabytes?? I'll believe it when I see it.
     
  3. blanarahul

    blanarahul

    Joined:
    Dec 17, 2011
    Messages:
    116 (0.11/day)
    Thanks Received:
    7
    You can't compare pre-Kepler CUDA cores with AMD shaders. They ran at a separate, doubled shader clock, remember??

    Secondly, pre-GCN AMD cores were much simpler than CUDA cores.
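    To put numbers on the shader-clock point, here's a back-of-the-envelope sketch (reference clocks assumed from the spec sheets; peak single-precision FLOPS taken as cores x 2 FLOP/clock for FMA x shader clock):

    // Rough peak single-precision throughput from core count and shader clock.
    // Pre-Kepler NVIDIA shaders ran at roughly 2x the core clock (hot clock).
    #include <cstdio>

    int main() {
        struct Gpu { const char* name; int cores; double shaderClockGHz; };
        Gpu gpus[] = {
            {"GTX 580 (Fermi, hot clock)",      512, 1.544},
            {"GTX 680 (Kepler, unified clock)", 1536, 1.006},
            {"HD 6970 (VLIW4)",                 1536, 0.880},
        };
        for (const Gpu& g : gpus)
            std::printf("%-32s %4d cores -> %5.0f GFLOPS\n",
                        g.name, g.cores, g.cores * 2.0 * g.shaderClockGHz);
        return 0;
    }

    That works out to roughly 1581 GFLOPS for the GTX 580 versus 3090 GFLOPS for the GTX 680: triple the cores, but only about double the peak throughput, precisely because Kepler dropped the doubled shader clock. So raw core counts across those generations (let alone across vendors) don't compare.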
     
  4. Prima.Vera

    Prima.Vera

    Joined:
    Sep 15, 2011
    Messages:
    2,251 (1.98/day)
    Thanks Received:
    293
    Sorry, but you can NOT compare CUDA cores and AMD's GPU cores. They have totally different architectures, designs, and approaches.
     
  5. Xzibit

    Joined:
    Apr 30, 2012
    Messages:
    1,121 (1.23/day)
    Thanks Received:
    252
    :laugh:

    He thinks AMD uses CUDA cores too. :shadedshu: :banghead:

    :roll:

    :lovetpu:
     
  6. Recus

    Recus

    Joined:
    Jul 10, 2011
    Messages:
    528 (0.44/day)
    Thanks Received:
    179
    I was trying to explain, badtaylorx, why Maxwell doesn't look weak to me, but it turned into butthurt for some people.

    Where did I say that? If AMD used CUDA cores I would have said "10,000 CUDA cores, 700 W TDP, 199.9°C", but I said SP (stream processors).
     
  7. BiggieShady

    BiggieShady

    Joined:
    Feb 8, 2012
    Messages:
    1,010 (1.02/day)
    Thanks Received:
    346
    Location:
    Zagreb, Croatia
    It's the L2 cache, so it's a single memory pool for the whole GPU, and because the new shader modules are divided into 4 sub-blocks, each with its own L1 cache (light blue on the diagram), it's not so far-fetched that the L2 needed to be quadrupled from 512 KB to 2 MB ... but I'll give you that, it's much more likely for GM200 than for GM107. Wait and see, I guess.
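    For what it's worth, this will be easy to verify once cards ship; a minimal sketch using the CUDA runtime (the cudaDeviceProp::l2CacheSize field reports the L2 size in bytes):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // Query device 0 and print the properties relevant to this debate.
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        std::printf("%s: %d SMs, L2 cache = %d KB\n",
                    prop.name, prop.multiProcessorCount, prop.l2CacheSize / 1024);
        return 0;
    }

    Run that on a real GM107 board and the 2 MB question is settled immediately.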
     
  8. blanarahul

    blanarahul

    Joined:
    Dec 17, 2011
    Messages:
    116 (0.11/day)
    Thanks Received:
    7
    This is the only part of your comment that I could comprehend. Can you please elaborate?
     
  9. BiggieShady

    BiggieShady

    Joined:
    Feb 8, 2012
    Messages:
    1,010 (1.02/day)
    Thanks Received:
    346
    Location:
    Zagreb, Croatia
    When you have a multi-core architecture (a GPU being an extreme example), the L1 and L2 caches have to be organized hierarchically (multi-level memory paging algorithms, etc.).
    Nvidia changed the SMM so that it is internally subdivided into 4 blocks, each with its own L1 cache (Kepler has a simpler L1 organization per SMX), so a larger L2 cache is needed to accommodate the changes in the cache hierarchy.
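    A minimal sketch of why that split matters (a hypothetical reduction kernel, purely illustrative): __shared__ data lives in the SM's own L1-level storage, while global memory traffic from every SM funnels through the single chip-wide L2:

    #include <cuda_runtime.h>

    __global__ void blockSum(const float* in, float* out, int n) {
        // Assumes blockDim.x == 256 (a power of two) for the tree reduction.
        __shared__ float tile[256];   // per-SM storage: one copy per resident block
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;  // global read, serviced via L2
        __syncthreads();
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            // Tree reduction entirely inside the SM: no L2 traffic on this path.
            if (threadIdx.x < s) tile[threadIdx.x] += tile[threadIdx.x + s];
            __syncthreads();
        }
        if (threadIdx.x == 0) out[blockIdx.x] = tile[0];  // global write via L2
    }

    The more independent L1 pools are reading and writing global memory at once, the more capacity the shared L2 needs to keep them all fed, which is why a deeper L1 split plausibly goes hand in hand with a bigger L2.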
     
  10. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,167 (0.91/day)
    Thanks Received:
    87
    Location:
    So. Cal.
    Nvidia has to get really efficient at this level or they're just going to lose all that entry/low-end desktop and laptop business. Intel will have decent graphics in almost all of its next-generation offerings; if Nvidia can't provide solutions to OEMs and boutique builders, they might not even be asked to show up.

    AMD APU platforms along with Hybrid CrossFire could become noteworthy (it seems it might finally start to become relevant, although we've heard that for many years), and much of the entry/low-end desktop and laptop business is warming to AMD's APUs. As entry systems, APUs offer class-leading graphics for their low-end CPU tier in an efficient package: plenty good for starting to game, while still giving fairly good efficiency/battery life as a standalone solution. If laptops from boutique builders can be optioned with Hybrid CrossFire, buyers get the added benefit of dual GPUs under load and the efficiency of the APU graphics alone when not stressed by 3D gaming: the best of both. Desktops are very similar: buy an APU desktop and later add a 75 W AMD card, and you get the benefit of dual GPUs without even worrying about the PSU. Lastly, perhaps some boost from Mantle (yet to be seen).

    For an Intel system with discrete Nvidia graphics, does Intel play ball by offering CPU SKUs with disabled graphics that are still decent/cheap enough for builders to add Nvidia graphics and tout class-leading efficiency? Can the iGPU be turned off or disabled completely in the BIOS? If not, might the iGPU section still be costing some efficiency? Do we see a time (if ever) when Intel permits the Nvidia chip to power down during non-stressful operation and hand off to the Intel iGPU? (Is that something Intel sees in its long-term interest?) Meanwhile, you aren't getting anything like the Hybrid CrossFire advantage (ever?).

    Nvidia on an Intel CPU has to match an APU Hybrid CrossFire arrangement in perf/W and price to stay viable. I could see many OEMs and boutique builders viewing an APU as the better seller for an entry laptop. It can be sold with the APU providing solid graphics, while offering an MXM slot for a premium. Down the road, perhaps they get the sale of the discrete card, and maybe even a new higher-capacity battery. Desktops can tout the "Hybrid CrossFire option" upgrade, while keeping the efficiency of powering down under less stressful (2D) workloads.
     
    Last edited: Feb 14, 2014
  11. Cataclysm_ZA

    Cataclysm_ZA New Member

    Joined:
    Nov 5, 2012
    Messages:
    21 (0.03/day)
    Thanks Received:
    1
    Location:
    Jeffreys Bay, South Africa
    To be honest, that's already the case with the market now. Most laptops don't have Nvidia graphics, and only a few OEMs elect to use the cheaper stuff (GT 710M, 720M, etc.) in their machines.

    Maxwell, being much more efficient, could change entry-level graphics in mobile devices significantly, because the low-end desktop and laptop ranges are still using Fermi silicon. Tegra K1 is already the lowest level of implementation for Nvidia graphics, so it remains to be seen what they do with products that have two to three shader modules enabled, who they sell them to, and how they perform. At best, I think, the jump will be similar to the move from Whistler to Mars for AMD.

    Nope. It's Optimus or nothing in Intel's case. Nvidia is really the third wheel in the mobile market right now.

    In the case of the South African market, over 90% of the laptops sold here have Intel APUs. Nvidia is found only in the Samsung and ASUS ranges in the middle market, AMD is restricted to a few netbooks and the MSI GX series, and in the high-end laptop market Nvidia has a monopoly on discrete graphics in single and multi-GPU configurations.

    But that runs counter to the plan from Intel and AMD to move towards greater levels of integration. The less they have people fiddling with the insides, the better. Also, integrated systems give them even more money in the long run when it comes to spares and insurance claims.
     
  12. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,167 (0.91/day)
    Thanks Received:
    87
    Location:
    So. Cal.
    Thanks for the response.

    Nvidia Optimus, or "switchable graphics", has been, as one would believe, even harder to implement than AMD's seamless hand-off, and AMD owns both of the parts involved.

    Why sure, Intel/AMD and even most OEMs look toward that, but the boutique builders are like: whatever sells is OK by them. And well it should be; they are there, and someone needs to "step to the edge" and deliver the latest options in this small arena, especially if it means a buck.
     
  13. Fluffmeister

    Fluffmeister

    Joined:
    Dec 22, 2011
    Messages:
    682 (0.66/day)
    Thanks Received:
    196
  14. xorbe

    Joined:
    Feb 14, 2012
    Messages:
    398 (0.40/day)
    Thanks Received:
    57
    Location:
    Bay Area, CA
  15. Fluffmeister

    Fluffmeister

    Joined:
    Dec 22, 2011
    Messages:
    682 (0.66/day)
    Thanks Received:
    196
    Whilst using less power on the same process.

    Sweet.
     
  16. xorbe

    Joined:
    Feb 14, 2012
    Messages:
    398 (0.40/day)
    Thanks Received:
    57
    Location:
    Bay Area, CA
    Yes yes that's good news of course! I just thought it was funny (but expected of marketing) to use the performance disaster model to compare to. At least it wasn't the 630 / 620 / 610, lol. :roll:
     
  17. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    8,370 (2.55/day)
    Thanks Received:
    1,221

    You are funny. AMD does more with less die space, and they don't use CUDA. Their prior VLIW designs were not even remotely comparable either. Considering the latest coolers easily tame the heat produced by the 290X, your argument is irrelevant; and let's not forget the knee-jerk reaction that was the 780 Ti, since the 290 makes the 780 an overpriced piece.
     
    10 Million points folded for TPU
  18. Mysteoa New Member

    Joined:
    Aug 23, 2013
    Messages:
    15 (0.03/day)
    Thanks Received:
    1
    You should have said it's about custom coolers. The first time I read it, I thought it was about the reference cooler.
     
