
NVIDIA Kepler Refresh GPU Family Detailed

Joined
Jun 27, 2011
Messages
6,666 (1.43/day)
Processor 7800x3d
Motherboard Gigabyte B650 Auros Elite AX
Cooling Custom Water
Memory GSKILL 2x16gb 6000mhz Cas 30 with custom timings
Video Card(s) MSI RX 6750 XT MECH 2X 12G OC
Storage Adata SX8200 1tb with Windows, Samsung 990 Pro 2tb with games
Display(s) HP Omen 27q QHD 165hz
Case ThermalTake P3
Power Supply SuperFlower Leadex Titanium
Software Windows 11 64 Bit
Benchmark Scores CB23: 1811 / 19424 CB24: 1136 / 7687
It's still too early to really know what's going on... we have rumors saying GK110 won't be a GTX 7xx card, rumors saying it is, rumors saying that the performance increase will be 15-25% on both sides, and now this, which is probably relying on old rumors from last January of the GK110 being ~45% faster than the 7970.

The 7xxx and 6xx round was pretty much a tie and I expect that to continue next round with no major shakeups from either green or red.

That is probably how it will turn out.
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,752 (1.78/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI GTX 1080 Ti Gaming X
Storage 3x SSDs 2x HDDs
Display(s) Dell U2412M + Samsung TA350
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
I love how everyone is saying AMD will have a hard time competing :roll: Did everyone forget that the yawn that is the 7970 GHz Edition still beat out the GTX 680, and that this gen each company is, for the most part, equal at the typical price points?

8970 is expected to be 40% faster than the 7970

GTX 780 is expected to be 40-55% faster than the 680

Add in overclocking on both and we end up with the exact same situation as this generation. So in reality it just plain doesn't matter, lol. Performance is all I care about, and who gets product onto store shelves and from there into my hands. It doesn't matter who's fastest if it takes six months for stock to catch up.
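
To put those rumored numbers side by side, here is a minimal back-of-the-envelope sketch; the shared baseline of 100 is an assumption for illustration, not benchmark data:

```python
# Back-of-the-envelope only: treat the 7970 GHz Edition and GTX 680 as
# roughly tied at a normalized score of 100 (an assumption, not a benchmark),
# then apply the rumored generational uplifts quoted above.
hd7970_ghz = 100.0
gtx_680 = 100.0

hd8970 = hd7970_ghz * 1.40        # rumored ~40% uplift
gtx780_low = gtx_680 * 1.40       # rumored 40% ...
gtx780_high = gtx_680 * 1.55      # ... to 55% uplift

print(f"8970 ~ {hd8970:.0f}, GTX 780 ~ {gtx780_low:.0f}-{gtx780_high:.0f}")
# -> ~140 vs ~140-155: roughly the same relative standing as this generation,
#    before factoring in whatever overclocking headroom each side has.
```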
 

hoodlum

New Member
Joined
Oct 17, 2012
Messages
1 (0.00/day)
Low Power?

If you go back to the original linked article, the performance gains for GK114 and GK116 will only be 5-15%. That seems quite low considering the improvements to memory bandwidth, shaders, ROPs, etc., and would suggest NVIDIA may be focusing on lowering TDP rather than on pure performance increases. And prices will be increasing too.

I think people may be disappointed by the time these are released. I suspect AMD will show similar improvements next year as well with more focus on TDP.
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
Processor Intel i9-9900KS @ 5.2 GHz
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i Elite
Memory 32GB Viper Steel Series DDR4-4000
Video Card(s) RTX 3090 Founders Edition
Storage 2TB Sabrent Rocket NVMe + 2TB Intel 960p NVMe + 512GB Samsung 970 Evo NVMe + 4TB WD Black HDD
Display(s) 65" LG C9 OLED
Case Lian Li O11D-XL
Audio Device(s) Audeze Mobius headset, Logitech Z906 speakers
Power Supply Corsair AX1000 Titanium
I love how everyone is saying AMD will have a hard time competing :roll: Did everyone forget that the yawn that is the 7970 GHz Edition still beat out the GTX 680, and that this gen each company is, for the most part, equal at the typical price points?

I think, from reading pretty much every review of these cards, that the general impression this round is (wrongly) more favorable to Nvidia than AMD, and this carries over into forums/etc.

AMD did this to themselves: they released their 79xx cards horridly underclocked (especially the 7950) and at price points that were too high. They didn't make a move on either front soon enough, so when Kepler finally hit, reviewers were left looking at a situation where the 7970 was outperformed by a cheaper card. Then the 670 came in, trashed the 7950, and competed with AMD's previously $550 card at $150 less.

Those things defined the impressions most people have of this round. AMD then made the mistake of releasing their GHz edition as a reference card for reviewers, and most reviewers then dismissed it as too loud/etc.

You have to do a decent amount of homework before you start realizing that both companies at this point in time are pretty much dead even, and most people don't like to think that hard.

If AMD had released their 7970 clocked around 1050/1500 MHz for $500 at launch, and their 7950 at maybe 950/1400 for $400, I can guarantee you that the impressions would be different. Pretty much every single 7970/7950 will hit those clocks without messing with voltages, so I have no idea why they got so conservative. But they didn't make those moves, and so here we are.
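
For reference, a quick sketch of how far those hypothetical launch clocks would have sat above the actual reference clocks:

```python
# Quick arithmetic on the hypothetical launch clocks suggested above, measured
# against the actual reference clocks (7970: 925/1375 MHz, 7950: 800/1250 MHz).
def uplift_pct(new_mhz, old_mhz):
    return (new_mhz / old_mhz - 1) * 100

print(f"7970: core +{uplift_pct(1050, 925):.1f}%, memory +{uplift_pct(1500, 1375):.1f}%")
print(f"7950: core +{uplift_pct(950, 800):.1f}%, memory +{uplift_pct(1400, 1250):.1f}%")
# -> roughly +13.5%/+9.1% for the 7970 and +18.8%/+12.0% for the 7950, which is
#    the headroom the post argues AMD left on the table at launch.
```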
 

crazyeyesreaper

They were conservative in order to get better yields. Essentially, most chips can do 1050, yes, but not all of them can at the proper voltage or TDP level. They also have to harvest chips for the 7950: lower clocks meant more usable chips, and more usable chips means greater volume to put on store shelves.

Regardless, the refresh will probably see Nvidia take the lead, but not by a whole lot; they have more room to play when it comes to TDP than AMD does right now.
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
I understand they did it for better yields, but I haven't seen a 7970 that wouldn't do 1050 on stock volts. I'm sure they're out there, but they've gotta be a tiny minority. I think AMD just flat out screwed up figuring out how they needed to clock their cards for viable yields.
 

crazyeyesreaper

Probably, but it doesn't matter much; most overclocked 7970s on the market were already at 1000-1100 MHz before the GHz Edition dropped, lol. But I digress. Looking at the info available, if AMD limits themselves to 32 ROPs again but increases the shader count, they will be beaten by NVIDIA. Should AMD wise up and increase the ROP count to 48, they stand a good chance of being within reach, in that pre-overclocked models should fare well against a stock 780. Time will tell, of course.
 
Joined
Jun 27, 2011
Messages
6,666 (1.43/day)
Probably, but it doesn't matter much; most overclocked 7970s on the market were already at 1000-1100 MHz before the GHz Edition dropped, lol. But I digress. Looking at the info available, if AMD limits themselves to 32 ROPs again but increases the shader count, they will be beaten by NVIDIA. Should AMD wise up and increase the ROP count to 48, they stand a good chance of being within reach, in that pre-overclocked models should fare well against a stock 780. Time will tell, of course.

Can you explain what a ROP is and why it is/might be bottlenecking the 7970?
 

crazyeyesreaper

http://en.wikipedia.org/wiki/Render_Output_unit

Look back at the 5850 and 5870

Clock both to the same speed and the 5850, with fewer shaders but the same ROP count, was within 1-2% of the 5870, so the increased shader count didn't do a whole hell of a lot.

With GCN, shaders scale a bit better, yes, but notice:

The 7870, with 1280 GCN stream processors and 32 ROPs, can take on the 7950, which has 32 ROPs, 1792 shaders, etc.

Looking at previous GPUs:

7770 = 640 shaders 16 ROPs, 10 Compute Units, 40 TMUs - 3Dmark 11 P3500
7870 = 1280 shaders 32 ROPs, 20 Compute Units, 80 TMUs - 3Dmark 11 P6600
7970 = 2048 shaders 32 ROPs, 32 Compute Units, 128 TMUs - 3Dmark 11 P8000

What the 7970 probably would have looked like had AMD followed its previous design philosophy:
1920 shaders, 48 ROPs, 30 Compute Units, 120 TMUs, plus a higher GPU clock.

For the 8970, staying on the same 28 nm process, it's looking like AMD will push for 2500-2600 shaders; many are saying 2560, but no one knows for sure yet.

That's a ~25% increase in shaders. However, we can see from the 7870 to the 7950 that a 40% increase in shaders didn't do much for performance.
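
A minimal sketch of that scaling argument using the configurations listed above (reference clocks assumed; peak pixel fill rate approximated as ROPs x core clock):

```python
# A rough sketch of the scaling argument above: peak pixel fill rate is
# approximated as ROPs x core clock, using reference clocks for each card.
cards = {
    # name: (stream processors, ROPs, reference core clock in GHz)
    "HD 7770": (640,  16, 1.000),
    "HD 7870": (1280, 32, 1.000),
    "HD 7950": (1792, 32, 0.800),
    "HD 7970": (2048, 32, 0.925),
}

for name, (shaders, rops, clock_ghz) in cards.items():
    fill_rate = rops * clock_ghz  # Gpixels/s
    print(f"{name}: {shaders:4d} SPs, {rops} ROPs, {fill_rate:.1f} Gpixel/s")
# The 7970 carries 60% more shaders than the 7870 yet a *lower* peak fill rate,
# which is the case being made for more ROPs (and higher clocks) on the 8970.
```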

AMD needs more ROPs and higher clocks for GCN to scale well with a large number of stream processors

So with just an increase in shaders AMD won't get far; they will need to up the number of Compute Units as well as TMUs, and with that the ROP count needs to be bumped up to maintain a balanced GPU design. Tweaks to the architecture will help, but a simple bump in shaders would mean that a heavily clocked 7970 could possibly catch the 8970, if the 40% figure is measured against the 925 MHz stock card. In that case we already see the 7970 at full overclocks pulling as much as 20% ahead on average, which would make a stock 8970 just 20% faster, so a better-balanced, more optimized design is necessary.

NVIDIA already has their design finished; AMD, on the other hand, we can only hope didn't screw the pooch.
 
Last edited:
Joined
Apr 30, 2012
Messages
3,881 (0.89/day)

That deals with capacity, something Nvidia complains very little about. For the past three quarters Nvidia has been complaining about wafer yields, since they moved to paying per wafer instead of per working die.
Look up any Nvidia transcript this year and 28 nm yield issues, along with margins, will be the dominant fallback.

Nvidia is currently in talks with Samsung to use its 28 nm fabs, but Samsung is more expensive, and Nvidia only uses Samsung for initial fabbing of designs, looking to GlobalFoundries and TSMC for production.
Samsung will have an open slot given their recent litigation with Apple, and companies like Qualcomm, Nvidia and others will be looking to fill that slot; Samsung will charge a premium, I'm sure.
 
Joined
May 18, 2005
Messages
65 (0.01/day)
This just proves everything I and many others said early on: the GTX 680 was to be the GTX 660 Ti, but AMD flopped and left the door open for Nvidia to cash in on a mid-range part at high-end prices. I could not wait and went with a pair of GTX 670 FTWs to feed my needs, and they will do me for a few years, I think.

It proves nothing. In fact, if anything, it shows nVidia didn't have a great, available GK100. Now that GK110 came out well, they may be releasing it as the high-end. You really need to not be so hung-up on codenames.

It's still too early to really know what's going on... we have rumors saying GK110 won't be a GTX 7xx card, rumors saying it is, rumors saying that the performance increase will be 15-25% on both sides, and now this, which is probably relying on old rumors from last January of the GK110 being ~45% faster than the 7970.

The 7xxx and 6xx round was pretty much a tie and I expect that to continue next round with no major shakeups from either green or red.

Considering the Tesla card specs were recently, and accidentally, outed by a CAD vendor (the K20 card), with GK110-based cards up for order, and considering 3DCenter tends to be pretty knowledgeable, I would put my bet on this rumor being fairly accurate, pending good clock speeds at release for the GeForce variant.

Also, a useful post from OCN and my reply:

-----

Nowyn said:
Say the rumor is true.
We have 2496 CUDA cores out of 2880, i.e. two SMX clusters disabled. That gives us 2496 - 1536 = 960 extra cores, which is 62.5% more.
There are 16 more ROPs, a 50% increase over the 32 in GK104.
Plus a 384-bit bus, which is 50% wider.
Sure, the core clock will be lower, which will reduce the theoretical ROP and core performance increase. At 700 MHz it would result in 35% more ROP throughput and a 43.75% core performance increase. The memory controller is probably also tweaked, providing more than a 50% bandwidth increase over GK104.
So a 40-50% ballpark is quite realistic, depending on the final clocks.

Exactly... and we may see further optimizations à la GF104 vs. GF114. I doubt it'll come in at "just" 700 MHz, but if it does, it's still not outside the realm of possibility that it could be 50% faster out of the box.
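
For what it's worth, the unit-count side of that arithmetic checks out; a minimal sketch using the configurations cited in the quote:

```python
# Sanity check of the unit-count side of Nowyn's arithmetic, using the GK104
# (GTX 680) configuration and the rumored cut-down GK110 configuration above.
gk104     = {"CUDA cores": 1536, "ROPs": 32, "bus width (bits)": 256}
gk110_cut = {"CUDA cores": 2496, "ROPs": 48, "bus width (bits)": 384}

for key in gk104:
    gain = (gk110_cut[key] / gk104[key] - 1) * 100
    print(f"{key}: {gk104[key]} -> {gk110_cut[key]} (+{gain:.1f}%)")
# -> +62.5% CUDA cores, +50% ROPs, +50% bus width; how much of that survives
#    as real-world performance comes down to the final clocks, as noted above.
```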
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
There's more than enough evidence to substantiate that the GK104 was drawn up to be the 660ti and not the 680...
 
Joined
May 18, 2005
Messages
65 (0.01/day)
There's more than enough evidence to substantiate that the GK104 was drawn up to be the 660ti and not the 680...

Oh, perhaps it was originally, but GK100 was certainly not "held back" so they could "put out a midrange card as high-end for mad profits!!!!" as some people like to proclaim.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.63/day)
Oh, perhaps it was originally, but GK100 was certainly not "held back" so they could "put out a midrange card as high-end for mad profits!!!!" as some people like to proclaim.

This is always what I thought. If nVidia could truly release a card twice as fast as what AMD has, using the same foundry, then they would, since that would ensure far more sales and profit than selling something that "saves on costs" instead.

In fact, had nVidia done this, it would to a degree amount to price fixing, which of course is illegal.

Of course, now that both cards are here, and we can see the physical size of each chip, we can easily tell that this is certainly NOT the case, at all, so whatever, it's all just marketing drivel.

In fact, it wouldn't really be any different than AMD talking about Steamroller. :p "Man, we got this chip coming...";)
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
Oh, perhaps it was originally, but GK100 was certainly not "held back" so they could "put out a midrange card as high-end for mad profits!!!!" as some people like to proclaim.

You are correct... the idea that it was intentionally held back is nonsense. However, the chip did disappear among a ton of rumors about yield problems, so it seems best to reason that they were forced into holding it back due to poor yields. Fortunately for them, they were able to hit the performance target they needed (set by AMD) with GK104.

Wound up being a big win for them on the business side of things (because it IS a midrange card from a manufacturing point of view, with a high end price) and a loss for consumers (who lost out on potentially much greater performance).
 

crazyeyesreaper

More likely it was held back because Nvidia needed to release something rather than face ongoing delays like they did with Fermi, aka the GTX 480. GK104 offered plenty of performance and allowed them to keep GK110 in the wings for a refresh; it essentially gave them a performance boost for the next series without needing much more input, and instead gave them time to further tweak the chip.

It's better to release a product when it's truly ready than to release early with massive issues. My guess is that with Kepler, Nvidia learned from their mistakes with Fermi, and to great effect.
 
Joined
Apr 30, 2012
Messages
3,881 (0.89/day)
There's more than enough evidence to substantiate that the GK104 was drawn up to be the 660ti and not the 680...

If GK104 were truly a mid-size chip, Nvidia would be making out like thieves with a very profitable mid-range part. That's not what Nvidia has been saying in their quarterly reports and conference calls to investors. They have been voicing concerns about production, yields and margins since their first report this year.
That theory doesn't really reflect Nvidia's own stance, and it makes even less sense given that AMD has gained market share in the discrete graphics sector for two straight quarters.

I think that's more of a forum myth driven by fanboyism.

Think about it. As a company you're losing market share, and sales are down a million units from quarter to quarter. You'd think it would be the opposite if you were selling a mid-range chip at great profit for the high-end market.

If for some weird reason that were true, then it's a horrible design and execution.
 
Joined
Mar 24, 2012
Messages
528 (0.12/day)
Oh, perhaps it was originally, but GK100 was certainly not "held back" so they could "put out a midrange card as high-end for mad profits!!!!" as some people like to proclaim.

The way I heard it, GK100 was not held back; it was scrapped and redesigned into GK110. IMO, if AMD had been able to get much better performance out of the 7970 from launch day, maybe Nvidia would have been forced to use that scrapped GK100 as their flagship. But luckily for Nvidia, AMD chose to be conservative with the 7970 clocks, and Nvidia was able to make GK104 match the 7970's performance. lol, I think originally Nvidia wanted GK104 clocked around 700 MHz and intended to market the card with an 'overclocker's dream' slogan, just like they did with the 460 and 560. :roll:
 
Joined
May 3, 2012
Messages
45 (0.01/day)
Location
Jakarta
System Name Clemot
Processor Intel Core i5-2500K
Motherboard MSI P67A-GD65
Cooling Thermalright HR-02 Macho
Memory Corsair CMX9 DDR3-1600 2x2GB
Video Card(s) Zotac GTX560Ti OC 1GB
Storage WD Black 1TB, Seagate 80GB 3xRaid0
Display(s) DELL E198WFP 19"
Case NZXT Guardian 921 RB Red
Audio Device(s) Creative Audigy 2 Platinum
Power Supply Xigmatec NRP-600w
Why, Nvidia, why? I ordered a GTX 670 and it's still stuck at customs, and now this news. For the same price next year, a probable GK110 for the GTX 780 & GTX 770. That will mean a GTX 760 Ti performing equal to or better than a GTX 680, and a GTX 760 equal to or better than a GTX 670, all at better prices than what people are paying currently... why, Nvidia? I love performance increases, but I already feel bad; I should think about selling the GTX 670 sometime early next year if this information turns out to be true.

Only buy Nvidia at the revision stage, that is, the second refresh after a major architecture change. GK110 looks sweet.
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
If you assume that GK104 was drawn up originally to be the 680, as it eventually was, you have to come up with an explanation for:

-All the rumors and leaked info until late Jan/Feb of this year which had the GTX 680 being based on the GK110. That wasn't one or two isolated rumors... there was tons of info floating around indicating that to be the case. Almost NOTHING indicated GK104 to be the high end chip, not until GK110 completely disappeared and rumors of yield problems started cropping up all over.
-The limited memory bus (256 bit) on the GK104, which is typically reserved for mid level cards and not high-end
-The PCB design itself, most notably as it appears on the 670 (which is close to being a half-length PCB in the reference designs).

If you assume that GK110 was originally supposed to be the 680 and GK104 was to be the 660ti, as I do, it makes sense of the above information quite well. As for Nvidia not "making out like [a thief]", the explanation for that is readily apparent in their yield problems, which affected GK104 as well (remember - the GTX 680 was basically a paper launch for 2+ months). Also, aren't desktop GPUs a relatively low-profit/revenue area anyways from a business perspective?

We'll never know with 100% certainty, but I think that it makes better sense of the available data that the original GTX 6xx lineup was to include both Gk110 (680/670?) and GK104 (660ti/660).
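
On the memory-bus point above in particular, peak bandwidth scales directly with bus width; a small sketch (the GK110 data rate here is an assumption, not a spec):

```python
# Peak memory bandwidth = (bus width / 8) x effective data rate. A sketch of
# why the 256-bit bus mentioned above reads as a mid-range choice next to the
# 384-bit parts; the GK110 data rate here is an assumption, not a spec.
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(f"GTX 680  (256-bit @ 6.0 Gbps): {bandwidth_gb_s(256, 6.0):.0f} GB/s")
print(f"HD 7970  (384-bit @ 5.5 Gbps): {bandwidth_gb_s(384, 5.5):.0f} GB/s")
print(f"GK110(?) (384-bit @ 6.0 Gbps): {bandwidth_gb_s(384, 6.0):.0f} GB/s")
# -> roughly 192, 264 and 288 GB/s respectively.
```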
 
Last edited:

cadaveca

you have to come up with an explanation for:

You do not have to explain anything.

Period.


Die sizes say GK100 or whatever was never possible.

[Die-shot images of the HD 7970 and GTX 680 omitted]

Note how the AMD chip has nearly 33% more transistors, but is barely physically larger than GTX 680.

If nVidia could have fit more functionality into the same space, they would have.


They could have planned to release something different all they wanted, but if they had, that chip would have to have been quite a bit larger than HD 7970 is.

Since nVidia is selling a chip that is much the same size as the 7970, per wafer they aren't getting that many more chips.
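
As a rough illustration of that per-wafer point, here is the standard gross-die estimate (a sketch only; the die areas are approximate published figures, and the GK110 area is only a rumored estimate):

```python
import math

# Rough gross-dies-per-wafer estimate to illustrate the per-wafer point above.
# Die areas are approximate published figures; the GK110 area is only a
# rumored estimate at this point.
def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Usable wafer area divided by die area, minus an edge-loss correction."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("GK104 (~294 mm^2)", 294),
                   ("Tahiti (~352 mm^2)", 352),
                   ("GK110 (~550 mm^2, rumored)", 550)]:
    print(f"{name}: ~{gross_dies_per_wafer(area):.0f} gross dies per 300 mm wafer")
# Defect density, yields and binning cut these gross numbers down further.
```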


If Nvidia is selling a mid-range chip as high-end, they either have HUGE HUGE HUGE design issues,


OR AMD is doing the exact same thing.


:roll:


Fact is, the GTX 680 ain't no mid-range chip, unless you believe that most of that there chip is deactivated.
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
You do not have to explain anything.

Die sizes say GK100 or whatever was never possible.

This doesn't make much sense... why do we now have rumors of that same GK110 being released? Die size constraints will still be there... if the die size were the inherent problem here, GK110 would have been scrapped and we wouldn't be reading this article right now.
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
it WAS scrapped.

Did you read the article?

When it's time for Kepler Refresh to go to office, TSMC will have refined its process enough for mass-production of GK110, a 7.1 billion transistor chip on which NVIDIA's low-volume Tesla K20 GPU compute accelerator is currently based.
The GK110 will take back the reins of powering NVIDIA's flagship single-GPU product, the GeForce GTX 780. This product could offer a massive 40-55% performance increase over GeForce GTX 680, with a price ranging anywhere between US $499 and $599. The same chip could even power the second fastest single-GPU SKU, the GTX 770. The GK110 physically packs 2880 CUDA cores, and a 384-bit wide GDDR5 memory interface.

Doesn't sound like "scrapped" to me... unless you want to argue that this is just bogus, which it could be.
 

cadaveca

Did you read the article?



Doesn't sound like "scrapped" to me... unless you want to argue that this is just bogus.

I'm not arguing that it is bogus.

Not at all.

But the fact of the matter is that what nVidia can do with TSMC's 28nm, AMD can as well.

And AMD's already 33% more efficient in used die space.

If you believe the 7.1 billion transistor figure, then it must be twice as big as the current GTX 680 silicon (3,078 million transistors, BTW), or the current GTX 680 really is a horrible, horrible design and it's a feat of wonder that nVidia managed to get it stable.

And how does a doubling of transistors only equal a 55% increase in performance?

Oh, I read it just fine. :p


Argue that it's bogus... :roll:
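
Putting numbers on that doubling-of-transistors question (a hedged sketch; the ~3.54 billion GK104 figure is the commonly cited one, shown alongside the figure quoted above):

```python
# The arithmetic behind the doubling-of-transistors question above, treating
# the counts as given: 7.1 billion for GK110, with GK104 shown both at the
# figure quoted in the post and at the more commonly cited ~3.54 billion.
gk110_millions = 7100
for label, gk104_millions in [("3,078M (as quoted above)", 3078),
                              ("3,540M (commonly cited)", 3540)]:
    ratio = gk110_millions / gk104_millions
    print(f"GK110 vs GK104 at {label}: {ratio:.2f}x the transistors")
# -> roughly 2.0-2.3x the transistors for a rumored 40-55% gaming uplift; some
#    of that budget presumably goes to compute features rather than graphics.
```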
 