Wednesday, February 12th 2014

GM107 Features 128 CUDA Cores Per Streaming Multiprocessor

NVIDIA's upcoming GM107 GPU, the first to be based on its next-generation "Maxwell" GPU architecture, reportedly features a different arrangement of CUDA cores and streaming multiprocessors from the one typically associated with "Kepler," although the component hierarchy is similar. The chip reportedly features five streaming multiprocessors, the highly integrated compute subunits of the GPU. NVIDIA is referring to these units as "streaming multiprocessor (Maxwell)," or SMMs.

Further, each streaming multiprocessor features 128 CUDA cores, not the 192 CUDA cores found in the SMX units of "Kepler" GPUs. If true, GM107 features 640 CUDA cores, all of which will be enabled on the GeForce GTX 750 Ti. If NVIDIA is carving out the GTX 750 by disabling one of those streaming multiprocessors, its CUDA core count works out to 512. NVIDIA will apparently build two GPUs on the existing 28 nm process, the GM107 and the smaller GM108, and three higher-performing chips on the next-generation 20 nm process: the GM206, the GM204, and the GM200. The three, as you might have figured out, succeed the GK106, GK104, and GK110, respectively.
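If and when these cards ship, the basic layout will be easy to sanity-check in software: the CUDA runtime reports the number of streaming multiprocessors on a device, while the cores-per-SM figure has to be filled in by hand, since the API does not expose it. A minimal host-side sketch, assuming the 128 cores per SMM claimed here:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::printf("No CUDA device found\n");
        return 1;
    }
    // The runtime reports the SM count directly; cores per SM are not
    // exposed by the API, so the 128-per-SMM figure from this report is
    // supplied by hand (Kepler SMX units would use 192 here instead).
    const int coresPerSM = 128;  // assumption taken from the report
    std::printf("SMs: %d, estimated CUDA cores: %d (compute %d.%d)\n",
                prop.multiProcessorCount,
                prop.multiProcessorCount * coresPerSM,
                prop.major, prop.minor);
    return 0;
}
```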
Source: VideoCardz

42 Comments on GM107 Features 128 CUDA Cores Per Streaming Multiprocessor

#26
WhoDecidedThat
Xzibit: No kidding, it has more L2 cache than the TITAN/780 Ti.

GM107 = 2 MB
GK107 = 256 KB
GK110 = 1.5 MB
Hawaii = 1 MB

12 MB of L2 cache on die soon... :respect:
GM107 has 2 MB of L2 cache?? I find that extremely hard to believe. It must be 2 megabits, i.e. 256 KB, or maybe 512 KB at most. But 2 MB?? I'll believe it when I see it.
#27
WhoDecidedThat
Recus: Nvidia has always achieved more with fewer CUDA cores since they were introduced:
8800 GTX/128 - HD 2900/320
GTX 285/240 - HD 4890/800
GTX 480/480 - HD 5870/1600
GTX 580/512 - HD 6970/1536
GTX 680/1536 - HD 7970/2048
GTX 780/2304-2880 - R9 290X/2816

Now both companies are tied and Nvidia is optimizing their cores again. One day AMD will realize that something is not right with 10,000 SPs, 700 W TDP, and 199.9°C.
You can't compare pre-Kepler CUDA cores with AMD GPUs. They had a shader clock, remember?

Secondly, pre-GCN AMD cores were much simpler than CUDA cores.
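For a rough sense of why the raw counts mislead, peak single-precision throughput back then was roughly 2 FLOPs per core per shader clock, and pre-Kepler shader domains ran at about twice the core clock. A back-of-the-envelope sketch using the reference clocks (illustrative figures, board-dependent):

```cpp
#include <cstdio>

// Rough peak single-precision throughput: 2 FLOPs per core per shader clock.
// Clocks below are the reference specs, used purely for illustration.
static double peak_gflops(int cores, double shader_clock_ghz) {
    return 2.0 * cores * shader_clock_ghz;
}

int main() {
    // GTX 580: 512 CUDA cores, shader domain at ~1.544 GHz (about double the core clock).
    std::printf("GTX 580 : ~%.0f GFLOPS\n", peak_gflops(512, 1.544));
    // HD 6970: 1536 VLIW stream processors at the 0.88 GHz engine clock.
    std::printf("HD 6970 : ~%.0f GFLOPS\n", peak_gflops(1536, 0.880));
    return 0;
}
```

On paper the HD 6970 still leads, but nowhere near the 3:1 ratio the core counts suggest, and those simpler VLIW lanes were much harder to keep busy in practice.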
#28
Prima.Vera
Recus: Nvidia has always achieved more with fewer CUDA cores since they were introduced:
8800 GTX/128 - HD 2900/320
GTX 285/240 - HD 4890/800
GTX 480/480 - HD 5870/1600
GTX 580/512 - HD 6970/1536
GTX 680/1536 - HD 7970/2048
GTX 780/2304-2880 - R9 290X/2816

Now both companies are tied and Nvidia is optimizing their cores again. One day AMD will realize that something is not right with 10,000 SPs, 700 W TDP, and 199.9°C.
Sorry, but you can NOT compare CUDA cores and AMD's GPU cores. They have totally different architectures, designs, and approaches.
#29
Xzibit
Recus: Nvidia has always achieved more with fewer CUDA cores since they were introduced:
:laugh:

He thinks AMD uses CUDA cores too. :shadedshu: :banghead:

:roll:

:lovetpu:
#30
Recus
blanarahul: You can't compare pre-Kepler CUDA cores with AMD GPUs. They had a shader clock, remember?

Secondly, pre-GCN AMD cores were much simpler than CUDA cores.
Prima.Vera: Sorry, but you can NOT compare CUDA cores and AMD's GPU cores. They have totally different architectures, designs, and approaches.
I was trying to explain to badtaylorx why Maxwell doesn't look weak to me, but it turned into butthurt for someone.
Xzibit: :laugh:

He thinks AMD uses CUDA cores too. :shadedshu: :banghead:

:roll:

:lovetpu:
Where did I say that? If AMD used CUDA cores I would have said "10,000 CUDA cores, 700 W TDP, 199.9°C", but I said SP (stream processors).
#31
BiggieShady
blanarahul: GM107 has 2 MB of L2 cache?? I find that extremely hard to believe. It must be 2 megabits, i.e. 256 KB, or maybe 512 KB at most. But 2 MB?? I'll believe it when I see it.
It's L2 cache, so it's a single memory pool shared by the whole GPU. Because the new shader modules are divided into four sub-blocks, each with its own L1 cache (light blue on the diagram), it's not so far-fetched that the L2 needed to be quadrupled from 512 KB to 2 MB... but, I'll give you that, it's much more likely for GM200 than for GM107. Wait and see, I guess.
#32
WhoDecidedThat
BiggieShady: I'll give you that, it's much more likely for GM200 than for GM107. Wait and see, I guess.
This is the only part of your comment that I could comprehend. Can you please elaborate?
#33
BiggieShady
blanarahul: This is the only part of your comment that I could comprehend. Can you please elaborate?
When you have a multi-core architecture (a GPU is an extreme example of that), the L1 and L2 cache memory has to be organized hierarchically (multi-level memory paging algorithms, etc.).
NVIDIA changed how the SMM is internally subdivided into four blocks, each with its own L1 cache (Kepler has a simpler L1 cache organization per SMX), so a larger L2 cache is needed to accommodate the changes in the cache hierarchy.
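For what it's worth, the claimed 2 MB figure will be trivial to check once boards are out, since the CUDA runtime reports the L2 size in bytes. A minimal sketch:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::printf("No CUDA device found\n");
        return 1;
    }
    // l2CacheSize is reported in bytes; a 2 MB L2 would read back as 2097152.
    std::printf("%s: L2 cache = %d KB\n", prop.name, prop.l2CacheSize / 1024);
    return 0;
}
```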
#34
Casecutter
Nvidia has to get really efficient at this level or they're just going to lose all that entry/low-end desktop and laptop business. Intel will have decent graphics in nearly all next-generation offerings; if Nvidia can't provide solutions to OEMs and boutique builders, they might not even be asked to show up.

AMD APU platforms along with Hybrid CrossFire could become noteworthy (it seems it might finally start to become relevant, although we've heard that for many years), and we see much of the entry/low-end desktop and laptop business warming to AMD's APUs. As entry systems, APUs offer class-leading graphics for their low-end CPU class in an efficient package: plenty good to start some gaming, while still giving fairly good efficiency/battery life as a standalone solution. If laptops from boutique builders can be optioned with Hybrid CrossFire, they get the best of both: the added benefit of dual GPUs under 3D gaming stress, and the efficiency of just the APU graphics the rest of the time. Desktops are very similar: buy an APU desktop and later a 75 W AMD card, and you have the benefit of dual GPUs without even worrying about the PSU. Lastly, perhaps some boost from Mantle (yet to be seen).

For an Intel system with discrete Nvidia graphics, does Intel play ball by offering CPU SKUs with disabled graphics that are still decent/cheap enough for builders to add Nvidia graphics and still tout class-leading efficiency? Can the iGPU be turned off or disabled completely in the BIOS? If not, might the iGPU section still be taking away some efficiency? Do we see a time (if ever) when Intel permits the Nvidia chip to power down under non-stressful operation and hand off to the Intel iGPU? (Is that something Intel sees as being in its best interest long term?) Either way, you aren't getting anything like the Hybrid CrossFire advantage (ever?).

Nvidia on an Intel CPU has to be as good in perf/W and price as an APU Hybrid CrossFire arrangement to stay viable. I could see many OEMs and boutique builders seeing the APU as a better seller for an entry laptop. They can be sold with an APU offering solid graphics, while offering an MXM slot for a premium. Down the road, perhaps they can get the sale on the discrete card, maybe even a new "higher capacity" battery. Desktops can tout the "Hybrid CrossFire option" upgrade, while keeping the efficiency of powering down under less stressful (2D) workloads.
#35
Cataclysm_ZA
Casecutter: Nvidia has to get really efficient at this level or they're just going to lose all that entry/low-end desktop and laptop business. Intel will have decent graphics in nearly all next-generation offerings; if Nvidia can't provide solutions to OEMs and boutique builders, they might not even be asked to show up.
To be honest, that's already the case in the market now. Most laptops don't have Nvidia graphics, and only a few OEMs elect to use the cheaper stuff (GT 710M, 720M, etc.) in their machines.

Maxwell, being much more efficient, could change entry-level graphics in mobile devices significantly, because the low-end desktop and laptop ranges are still using Fermi silicon. Tegra K1 is already the lowest level of implementation for Nvidia graphics, so it remains to be seen what they do with products that have two to three shader modules enabled, who they sell them to, and how they perform. At best, I think, the jump will be similar to the move from Whistler to Mars for AMD.
Casecutter: For an Intel system with discrete Nvidia graphics, does Intel play ball by offering CPU SKUs with disabled graphics that are still decent/cheap enough for builders to add Nvidia graphics and still tout class-leading efficiency?
Nope. It's Optimus or nothing in Intel's case. Nvidia is really the third wheel in the mobile market right now.

In the case of the South African market, over 90% of the laptops sold here have Intel APUs. Nvidia is found only in the Samsung and ASUS ranges in the middle of the market, AMD is restricted to a few netbooks and the MSI GX series, and in the high-end laptop market Nvidia has a monopoly on discrete graphics in single- and multi-GPU configurations.
Casecutter: I could see many OEMs and boutique builders seeing the APU as a better seller for an entry laptop. They can be sold with an APU offering solid graphics, while offering an MXM slot for a premium. Down the road, perhaps they can get the sale on the discrete card, maybe even a new "higher capacity" battery.
But that runs counter to the plan from Intel and AMD to move towards greater levels of integration. The less they have people fiddling with the insides, the better. Also, integrated systems give them even more money in the long run when it comes to spares and insurance claims.
#36
Casecutter
Thanks for the response.

Nvidia Optimus, or "switchable graphics", has been, as one would expect, harder to implement than even AMD's attempts at such seamless hand-offs, and AMD owns both of the parts doing the handshake.
Cataclysm_ZA: But that runs counter to the plan from Intel and AMD to move towards greater levels of integration.
Why sure, Intel/AMD and even most OEMs look toward that, but for the boutique builders, whatever sells is OK by them. And well it should be; they're there, and someone needs to "step to the edge" and deliver the latest options in that small arena, especially if it means a buck.
#39
Fluffmeister
Whilst using less power on the same process.

Sweet.
#40
xorbe
Fluffmeister: Whilst using less power on the same process.

Sweet.
Yes, yes, that's good news of course! I just thought it was funny (though expected of marketing) to pick the performance-disaster model to compare against. At least it wasn't the 630/620/610, lol. :roll:
#41
Steevo
Recus: Nvidia has always achieved more with fewer CUDA cores since they were introduced:
8800 GTX/128 - HD 2900/320
GTX 285/240 - HD 4890/800
GTX 480/480 - HD 5870/1600
GTX 580/512 - HD 6970/1536
GTX 680/1536 - HD 7970/2048
GTX 780/2304-2880 - R9 290X/2816

Now both companies are tied and Nvidia is optimizing their cores again. One day AMD will realize that something is not right with 10,000 SPs, 700 W TDP, and 199.9°C.
You are funny. AMD does more with less die space, and they don't use CUDA. Their prior VLIW designs weren't even remotely comparable either. Considering the latest coolers easily tame the heat produced by the 290X, your argument is irrelevant, and let's not forget the knee-jerk reaction that was the 780 Ti, since the 290 makes the 780 an overpriced piece.
#42
Mysteoa
Steevo: You are funny. AMD does more with less die space, and they don't use CUDA. Their prior VLIW designs weren't even remotely comparable either. Considering the latest coolers easily tame the heat produced by the 290X, your argument is irrelevant, and let's not forget the knee-jerk reaction that was the 780 Ti, since the 290 makes the 780 an overpriced piece.
You should have said it's about custom coolers. The first time I read it, I thought it was about the reference cooler.