
NVIDIA GeForce Kepler Packs Radically Different Number Crunching Machinery

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
It takes 3 times the number of Kepler shader units to equal a Fermi shader unit, because shader clocks will be equal to raster clocks on Kepler.

Whaaaaaaat?? Hot clocks are 2x the core clock, not 3x, so I can't even begin to think why you'd need 3 times as many shaders. I don't know where you're pulling that claim from, but it doesn't smell good.

You can call me a fanboy because I'm stating the facts (as if I cared), but at least make up an argument that doesn't sound so stupid. At least I didn't make an account just to crap on a forum with my only 4 posts.

GK110 is the one we want and since it has just taped out, it will not be released until Q3. Sorry green fans :)

Pff I don't know why I even cared to respond to you. I guess I didn't pay attention the first time. ^^ Freudian slip huh? :roll:

Hey, you got me for 3 posts, is that considered a success in Trolland? Congrats anyway.
 
Joined
Apr 27, 2011
Messages
53 (0.01/day)
Indeed, it's pointless to look at FLOPS if we don't know the efficiency of the architecture. Radeons had far higher theoretical numbers, but the efficiency was far lower than with Fermi.

Personally, a 256-bit memory bus is enough to say this won't beat Tahiti; at best it will equal it, but given Nvidia's usually slower memory clocks I find it unlikely. A GTX 580 replacement is most likely IMO, and it could be really good at that. Nvidia's best cards have usually been the high midrange like the 8800 GT or GTX 460.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Indeed, it's pointless to look at FLOPS if we don't know the efficiency of the architecture. Radeons had far higher theoretical numbers, but the efficiency was far lower than with Fermi.

FLOPS are not linearly and directly related to performance, but they are not meaningless either. Like I said in a previous post, Nvidia abandoned hot clocks and put in 2x as many SPs as in Fermi (GF104). They could have released a GK104 that consisted of 768 SPs while still using hot clocks, but they did what they did instead, obviously because it's better, or they wouldn't have changed it in the first place. It's safe to assume a similar efficiency, since the schedulers can still issue the ops in the exact same way as in Fermi: instead of issuing twice per core clock (because the shaders run at twice the clock), they issue to 2 different SIMDs, because there are twice as many SIMDs. That is, instead of S1-S2-S3-S1-S2-S3 they will do S1-S2-...-S6.
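If it helps to picture that issue pattern, here's a rough Python sketch of the idea; it's a toy model only, and the SIMD counts and the 2x clock ratio are just the round numbers from this discussion, not real hardware parameters:

```python
# Toy model of the two issue patterns described above (not real hardware).
CORE_CLOCKS = 4  # simulate a few core clock cycles

def hot_clocked_issue(simds=3):
    """Fermi-style: SIMDs run at 2x core clock, so each is issued to twice per core clock."""
    order = []
    for _ in range(CORE_CLOCKS):
        for _ in range(2):  # two hot-clock ticks per core clock
            order += [f"S{i + 1}" for i in range(simds)]
    return order

def core_clocked_issue(simds=6):
    """Kepler-style (as described): twice as many SIMDs, all running at core clock."""
    order = []
    for _ in range(CORE_CLOCKS):
        order += [f"S{i + 1}" for i in range(simds)]
    return order

print(hot_clocked_issue()[:6])   # ['S1', 'S2', 'S3', 'S1', 'S2', 'S3']
print(core_clocked_issue()[:6])  # ['S1', 'S2', 'S3', 'S4', 'S5', 'S6']

# Same number of issue slots per core clock either way, which is the argument
# for expecting similar per-SM efficiency.
assert len(hot_clocked_issue()) == len(core_clocked_issue())
```

Either way the scheduler fills the same number of issue slots per core clock; only the pattern changes.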

Personally, a 256-bit memory bus is enough to say this won't beat Tahiti; at best it will equal it, but given Nvidia's usually slower memory clocks I find it unlikely. A GTX 580 replacement is most likely IMO, and it could be really good at that. Nvidia's best cards have usually been the high midrange like the 8800 GT or GTX 460.

Nvidia has also done much better with lower BW*. Memory bandwidth is never the problem; really, how many times do we need to hear the same thing? The HD 5770 comes to mind. Neither AMD nor Nvidia, no one, will ever release a card that is severely held back by memory bandwidth. I can tell you something: they would never put in so many SPs and 128 TMUs only to find them severely held back by the bus.

The GTX 460 was a cut-down version BTW, and it was cut down on purpose so that it didn't come too close to the GTX 470 and completely nullify it. The full chip only came with the GTX 560 Ti, and that one is a good >30% faster than the previous (real) generation GTX 285. Based on the specs, the G*104 codename and the market segment, it's absolutely clear that GK104 will handily beat the GTX 580 (just like GF104 >>>>> GT200), at least up to 1920x1200, and will most probably beat Tahiti too.

* The GTX 560 Ti has 128 GB/s and the HD 6950 has 160 GB/s; that's a 25% difference with the same performance.
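Spelling out the footnote's arithmetic in a couple of lines of Python (the bandwidth figures are the ones quoted above):

```python
gtx560ti_bw = 128.0  # GB/s, as quoted above
hd6950_bw = 160.0    # GB/s, as quoted above

# The HD 6950 has 25% more bandwidth than the GTX 560 Ti...
print(round((hd6950_bw / gtx560ti_bw - 1) * 100))  # 25

# ...yet, per the post, the two cards deliver about the same performance.
```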
 
Joined
Apr 27, 2011
Messages
53 (0.01/day)
I don't care to argue about something that's pure conjecture at this point, but if you're really expecting GK104 to have 50% more shader performance than 580, you're in for a disappointment.

There are always people who expect 2x performance increases when a new gen arrives, and they eventually get disappointed when the real product comes along. If GK104 is smaller than Tahiti in die size, I find it very unlikely it'll manage to beat it in performance. IF Kepler really has a new arch it might skew things, but in past gens Nvidia has had far bigger dies fighting AMD's smaller chips. I doubt that'll change much now. That's all I'll comment on this, since we don't even know if the news piece has a word of truth in it.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
If GK104 is smaller than Tahiti in die size, I find it very unlikely it'll manage to beat it in performance.

Tahiti has 4.3 billion transistors; GF104 had 1.9 billion. They could simply have mirrored/doubled up GF104. The GTX 560 Ti in SLI, even with (oftentimes bad) SLI scaling, handily beats both the GTX 580 and HD 7970. I think it's even faster than the GTX 590/HD 6990.

It's very, very clear from the specs that this is 2x GF104, except for the memory bus: twice as many GPCs, twice as many TMUs, twice as many SPs if you think of GF104 as a 768 SP part running at core clock... You may think it's unlikely to beat it; I think it's a given. It's not about hopes and disappointment. If anyone really believes that Nvidia will release a chip with 100% more transistors than GF104 and 50% more transistors than GF110 without easily beating it... they are fucking crazy, man. That'd mean 50% of the transistors going down the drain, or 100% more transistors failing to improve performance by more than a mere 30%. That is not gonna happen, I tell you.

IF Kepler really has a new arch it might skew things, but in the past gens Nvidia has had far bigger dies fighting AMD's smaller chips. I doubt it'll change much now.

In the past Nvidia was using a lot of die space for compute*. AMD didn't. Now AMD does with GCN, and AMD has a far bigger die, as in twice as big, with Tahiti being only 30% faster than its predecessor. AMD's gaming efficiency went down dramatically, and that's a fact anyone can see. IF Nvidia's efficiency went up even a little bit, that's all they need for an easy win.

*Yet, based on transistor count and performance, Cayman and GF110 were actually very close in efficiency.
 

jamsbong

New Member
Joined
Mar 17, 2010
Messages
83 (0.02/day)
System Name 2500Kjamsbong
Processor Core i5 2500K @ 4.6Ghz
Motherboard Asrock Extreme 4 Z68
Cooling Zalman Reserator (CPU and GPU)
Memory DDR3 8GB
Video Card(s) EVGA Nvidia 560Ti 1GB
Storage 60GB Kingston SSD
Display(s) 24" Dell IPS
Case CoolerMaster 690 Advanced II
Audio Device(s) on-board
Power Supply Zalman ZM-600HP modular 600watt
Software Windows 7
First of all, the name GK104, the 256-bit memory bus and the small 340 mm² die size all indicate that it will be a mid-to-high-end card. Nvidia should have something better; when, I don't know, but I'm sure it won't be far away.
A realistic expectation is that it will be faster than Cayman and possibly on par with the GTX 580.

The CUs from the 8800 through Fermi were designed to crunch numbers efficiently, but at the cost of large die area and power consumption, which is why Nvidia's chips are always beastly large and consume a lot of power. I suspect the switch to an ATI-like CU is motivated by cost reduction (by reducing die size) and a better TFLOP/watt rating.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
First of all, the name GK104, the 256-bit memory bus and the small 340 mm² die size all indicate that it will be a mid-to-high-end card. Nvidia should have something better; when, I don't know, but I'm sure it won't be far away.
A realistic expectation is that it will be faster than Cayman and possibly on par with the GTX 580.

Remember this is not mid-range as it was before Fermi, when they had 3 chips: high end, mid-range (1/2 the high end) and low end (1/4). With Fermi they introduced the performance part, which is 3/4 of the high end (AMD did the same with Barts and now again with Pitcairn). GK104 is such a part. In Fermi that part was GF104, and it is around 40% faster than the GTX 285, while GF110 is 80% faster than the GTX 285.

Nvidia has always aimed at 2x the speed of the previous gen, as shown by the doubling of SPs, TMUs, etc. Depending on how well that doubling worked out, it has yielded a 60-80% increase in performance gen to gen. It's really safe to assume a similar increase this time around. So let's say the high-end Kepler is only 50% faster (the low end of the spectrum); that means that if the GTX 580 produces 100 fps, GK100/110 (whatever) would produce 150, and GK104, being 3/4 of the high-end chip, would produce about 112 fps. That's 12% over the GTX 580, pretty damn close to Tahiti.

This is for the low end of the spectrum. Do the calc if, just like GTX 285 -> GTX 580, Nvidia did an 80% increase again (a rough version of both calcs is sketched below).
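Here is that calc written out as a small Python sketch, using the post's own assumptions (the 50% and 80% gen-to-gen jumps and the 3/4 scaling are the poster's estimates, not measurements):

```python
gtx580_fps = 100.0  # baseline, as in the example above

for high_end_uplift in (1.50, 1.80):          # low and high end of the spectrum
    gk110_fps = gtx580_fps * high_end_uplift  # hypothetical high-end Kepler
    gk104_fps = gk110_fps * 0.75              # GK104 assumed to be 3/4 of the high end
    gain = (gk104_fps / gtx580_fps - 1) * 100
    print(f"{high_end_uplift:.0%} high end -> GK104 ~{gain:.0f}% over GTX 580")

# 150% high end -> GK104 ~12% over GTX 580
# 180% high end -> GK104 ~35% over GTX 580
```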
 
Joined
Apr 4, 2008
Messages
4,686 (0.80/day)
System Name Obelisc
Processor i7 3770k @ 4.8 GHz
Motherboard Asus P8Z77-V
Cooling H110
Memory 16GB(4x4) @ 2400 MHz 9-11-11-31
Video Card(s) GTX 780 Ti
Storage 850 EVO 1TB, 2x 5TB Toshiba
Case T81
Audio Device(s) X-Fi Titanium HD
Power Supply EVGA 850 T2 80+ TITANIUM
Software Win10 64bit
How big of a difference does this power-of-2 stuff really make? Like, if the 7970 had a 512-bit bus and much slower RAM to match the same bandwidth it has now, would it actually perform better?
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
How big of a difference does this power-of-2 stuff really make? Like, if the 7970 had a 512-bit bus and much slower RAM to match the same bandwidth it has now, would it actually perform better?

It makes no real difference, and cards are still made of lots of small power-of-two chunks. The 384-bit memory controller is really 6 x 64-bit memory controllers, each controlling one memory module, so there's your power of 2. Shaders in both AMD's and Nvidia's architectures are composed of 16-wide arrays, SIMDs, which is what really does the hard and fundamental work, so power of 2 again. TMUs and ROPs are typically clustered in groups of 4 or 8... but really it makes no difference; it's like that for convenience, until I hear otherwise. Rendering typically works on quads of pixels, 2x2 or 4x4, so that's why they tend to build GPUs that way. Other than that there's no reason that I know of.
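As a rough illustration of those power-of-two building blocks (the GDDR5 data rate below is an assumed round figure for the sketch, not any particular card's spec):

```python
controller_width_bits = 64  # one controller per memory module
num_controllers = 6         # 6 x 64-bit controllers -> 384-bit aggregate bus
bus_width_bits = controller_width_bits * num_controllers
print(bus_width_bits)       # 384

# Aggregate bandwidth only depends on total width x data rate,
# not on how the bus is diced into power-of-two controllers.
assumed_data_rate_gbps = 5.5  # Gbit/s per pin, an illustrative assumption
print(bus_width_bits * assumed_data_rate_gbps / 8)  # 264.0 GB/s
```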
 


creepingdeath

New Member
Joined
Dec 24, 2011
Messages
14 (0.00/day)
Benetanegia, does Jen-Hsun Huang give you handjobs for every post you make? The fanboydom has crossed the ridiculous threshold. Just understand that the performance you claim isn't possible with the architecture and specs given, especially with hot clocks gone. GK110 also isn't released until Q3; hopefully you won't lose too much sleep over that :laugh: So GK104 will produce a high-end part, because I don't see Nvidia releasing a mid-range GK104 card and not having a corresponding high-end card (GK110) until Q3. GK104 may come close to beating Tahiti, but it's definitely not a Tahiti killer. Charlie from SA commented that it is so far 10-20% slower than Tahiti in non-PhysX titles. And before you whine about Charlie, all of his leaks so far have been accurate; ALL of his Fermi leaks were accurate. Remember GK110 just taped out, so it is entering the ES and validation phase, which always takes 6-8 months.

Now I expect you'll go on about how GK110 is being released next week (even though it just taped out and hasn't entered the validation phase yet). Like I said, don't lose sleep, don't get too hurt over this.

Now hopefully the GK104 is close to the GTX 580 at a much lower price point; I could use a replacement for my old 570.
 
Joined
Oct 29, 2010
Messages
2,972 (0.60/day)
System Name Old Fart / Young Dude
Processor 2500K / 6600K
Motherboard ASRock P67Extreme4 / Gigabyte GA-Z170-HD3 DDR3
Cooling CM Hyper TX3 / CM Hyper 212 EVO
Memory 16 GB Kingston HyperX / 16 GB G.Skill Ripjaws X
Video Card(s) Gigabyte GTX 1050 Ti / INNO3D RTX 2060
Storage SSD, some WD and lots of Samsungs
Display(s) BenQ GW2470 / LG UHD 43" TV
Case Cooler Master CM690 II Advanced / Thermaltake Core v31
Audio Device(s) Asus Xonar D1/Denon PMA500AE/Wharfedale D 10.1/ FiiO D03K/ JBL LSR 305
Power Supply Corsair TX650 / Corsair TX650M
Mouse Steelseries Rival 100 / Rival 110
Keyboard Sidewinder/ Steelseries Apex 150
Software Windows 10 / Windows 10 Pro
Here's another source confirming these specs:

http://www.brightsideofnews.com/new...k1042c-geforce-gtx-670680-specs-leak-out.aspx

The interesting part, and I've seen this speculated in different forums, is that GK104 will be the GTX 680. If this is confirmed, and based on the specs, which seem to be right, I am pretty sure this card will be better than the 7970. I can't see NV releasing a GTX *80 that is slower than AMD's flagship.

I remember one post in another forum from a guy who was invited to a CUDA event at the end of January; he reported back that he saw a CUDA demo running on an unspecified 28nm part which was on average 28% faster than a GTX 580. Based on these specs, it is entirely possible that this is close to the performance of GK104.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
I think the real world test only shows that GF104 = GTX285 and GTX580 is 52% faster than GTX285.
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/27.html

Based on REAL WORLD values, I can expect on average, the new Kepler GK104 to be on par with GTX580. Which is what I said in previous 2 posts. So slower than Tahiti.

There's no full GF104 there, only the severely capped (in both shaders and clocks) GTX 460. The full GF104 would be the GTX 560 Ti, like I said. Also, there the GTX 580 is far more than 50% faster: i.e. at 1920x1200 the GTX 285 sits at 61% while the GTX 580 sits at 100%, so 100/61 = 64% faster.

But the above is with release drivers. If you look at a more modern review, you will see that the GTX 580 is around 80% faster, and the GTX 560 Ti (full GF104/GF114) is around 40% faster.
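For anyone checking the math, converting TPU-style relative-performance percentages into a speedup is just a division (the 61% and 100% figures are the ones cited above from the release-driver review):

```python
gtx285_rel = 61.0    # % relative performance at 1920x1200, as cited above
gtx580_rel = 100.0   # %
speedup = gtx580_rel / gtx285_rel - 1
print(round(speedup * 100))  # 64 -> the GTX 580 comes out ~64% faster
```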

GK104 will handily beat GTX580 just like GF104 handily beats GTX285, based on REAL WORLD values, and specs.

Nvidia (nobody, really) would put in 100% more SPs, 100% more TMUs, 100% more geometry, 100% more tessellators and ultimately a 100% bigger die just to let it be only 30% faster than its predecessor (GTX 560 Ti). It's not going to happen, no matter how many times you repeat it to yourself. The mid-range part used to be just as fast as its predecessor when mid-range meant 1/2 of the high end; now that the upper mid-range or performance segment means 3/4 of the high end, the performance chip will always be faster.

You say we don't know the efficiency of the shaders, but right next to that you claim (indirectly) that Kepler's efficiency, not only in shaders but also in TMUs, geometry and literally everything, is 50% of what it is in GF104. It's absurd. We don't know the efficiency, right, so for all we know the efficiency might be better too. We could just as easily say it will be 3x faster assuming 50% better efficiency, and that would NOT be more outrageous than your claim that it MUST be only as fast as the GTX 580 while it has 2x the specs (hence 50% efficiency).

I'm not claiming anything out of this world. GK104 has almost twice the Gflops and more than twice the texel fillrate of GF110; geometry and pretty much everything else is 25% faster, because clocks are 25% higher and it has the same number of GPCs and SMs (tessellators). And with such a massive difference in specs, a difference that suggests anything from 50% to 150% faster, I'm just saying that it will be 25% faster. It's not an outrageous claim, it's a very, very conservative guesstimate, and hopes/fanboyism has nothing to do with it (this is for the troll). Neither does what happened in the past; it is just spec comparison, and the evidence of the past just corroborates the plausibility of my claim. Stay tuned, because my crystal ball says I'm being very conservative, but 25% over the GTX 580 is what I'll claim for the time being.
 

jamsbong

New Member
Joined
Mar 17, 2010
Messages
83 (0.02/day)
System Name 2500Kjamsbong
Processor Core i5 2500K @ 4.6Ghz
Motherboard Asrock Extreme 4 Z68
Cooling Zalman Reserator (CPU and GPU)
Memory DDR3 8GB
Video Card(s) EVGA Nvidia 560Ti 1GB
Storage 60GB Kingston SSD
Display(s) 24" Dell IPS
Case CoolerMaster 690 Advanced II
Audio Device(s) on-board
Power Supply Zalman ZM-600HP modular 600watt
Software Windows 7
@Benetanegia OK, it is very obvious that you're a supa-dupa Nvidia fanboy. That is fine... how else can Nvidia stay afloat without support from someone like yourself?

Without getting into a fight over the speculative, unknown future performance of Kepler, let's get some facts straight:
GF104 = GTX460
GF114 = GTX560 Ti

I suggest you do some PROPER homework before spilling out lots of nonsense.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
@Benetanegia OK, it is very obvious that you're a supa-dupa Nvidia fanboy. That is fine... how else can Nvidia stay afloat without support from someone like yourself?

Without getting into a fight over the speculative, unknown future performance of Kepler, let's get some facts straight:
GF104 = GTX460
GF114 = GTX560 Ti

I suggest you do some PROPER homework before spilling out lots of nonsense.

My friend, do your own homework. GF114 and GF104 are the exact same chip; GF104 simply had disabled parts, just like GF100 had disabled parts. Maybe you would look more intelligent and help your cause if you spent more time checking your facts and less time calling people fanboys.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_560_Ti/

Getting into the fine print of NVIDIA's offer, the GeForce GTX 560 Ti is based on NVIDIA's new GF114 chip. As far as its specifications and transistor-count go, the GeForce GTX 560 Ti is identical to the GF104 on which GTX 460 was based, except that it has all 384 of the CUDA cores physically present enabled

http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_460_1_GB/

NVIDIA's GeForce Fermi (GF) 104 GPU comes with 384 shaders (CUDA cores) in the silicon but NVIDIA has disabled 48 of them to reach their intended performance targets and to improve GPU harvesting.



I'm EXTREMELY curious as to how you are going to (try to) spin this in your favor.
 
Joined
Nov 4, 2005
Messages
11,687 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Let's remember the 6970 has 2.7 TFLOPS while the GTX 580 has something like 1.5, so if we are talking about gaming benchmarks I don't think that's a factor.

That would apply if we were comparing VLIW to CUDA; however, we are comparing close to the same architecture.
 
Joined
Sep 15, 2011
Messages
6,469 (1.41/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
What, what is MLAA? It's useless, ATi has no innovative features like that! :rolleyes::rolleyes:

I like Nvidia's FXAA too, but the biggest problem is that it's NOT implemented in the drivers' control panel like ATI's. And there are so many games out there that don't have any AA support.

Is it that difficult, Nvidia, to implement FXAA in the drivers as well????:shadedshu:shadedshu:shadedshu
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
I like Nvidia's FXAA too, but the biggest problem is that it's NOT implemented in the drivers' control panel like ATI's. And there are so many games out there that don't have any AA support.

Is it that difficult, Nvidia, to implement FXAA in the drivers as well????:shadedshu:shadedshu:shadedshu

Nvidia "liberated" FXAA for anyone to use it, so it's open and afaik there's many FXAA injectors out there.

I don't know if they work with all games, because I wouldn't use them and don't really care about them. Personally I find both FXAA and MLAA degrade visual quality rather than enhance it (textures mainly, but also shaders).
 

jamsbong

New Member
Joined
Mar 17, 2010
Messages
83 (0.02/day)
System Name 2500Kjamsbong
Processor Core i5 2500K @ 4.6Ghz
Motherboard Asrock Extreme 4 Z68
Cooling Zalman Reserator (CPU and GPU)
Memory DDR3 8GB
Video Card(s) EVGA Nvidia 560Ti 1GB
Storage 60GB Kingston SSD
Display(s) 24" Dell IPS
Case CoolerMaster 690 Advanced II
Audio Device(s) on-board
Power Supply Zalman ZM-600HP modular 600watt
Software Windows 7
@Benetanegia OK, NV fanboy. :roll: GF114 is an update of GF104. Like you've described, GF104 has some bits fused off, whereas GF114 is the full-blown chip.

So when you said "GF104 handily beats GTX285", what you really means is GF114 beats GTX285. I am using GF114 in my computer and is a good card. However, I'll never call my card a GF104.

Since you can't even manage the simple task of getting the numbers right (a simple difference between a 1 and a 0), what makes your fantasy of Kepler vs Tahiti speculation believable/convincing?
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,763 (1.77/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI GTX 1080 Ti Gaming X
Storage 3x SSDs 2x HDDs
Display(s) Dell U2412M + Samsung TA350
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
Nvidia's high end will be around 45-55% faster than the GTX 580.

Nvidia will be faster, but it's going to be the exact same situation we have seen time and again.


The HD 6970 at launch was around $370.

The GTX 580 at launch was around $500.

That's a $130 price difference.

6970 to 7970 is around a 40% performance difference.
GTX 580 to 680 is expected to be 45-55%.

This means essentially the same difference we saw between the 6970 and GTX 580, aka 15%,

is about what we will see between a GTX 680 and HD 7970.

Nvidia will be faster by around 15% and charge a $100 premium for the performance difference.
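A quick back-of-the-envelope version of that arithmetic, using the post's own round figures (all of them estimates from the post, not measurements):

```python
hd6970 = 100.0           # baseline index
gtx580 = hd6970 * 1.15   # ~15% ahead of the 6970 last gen, per the post
hd7970 = hd6970 * 1.40   # 6970 -> 7970: ~40% jump
gtx680 = gtx580 * 1.45   # 580 -> 680: low end of the 45-55% estimate
print(round((gtx680 / hd7970 - 1) * 100))  # ~19, roughly the single-GPU gap described
```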
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
@Benetanegia OK, NV fanboy. :roll: GF114 is an update of GF104. Like you've described, GF104 has some bits fused off, whereas GF114 is the full-blown chip.

So when you said "GF104 handily beats GTX285", what you really mean is GF114 beats GTX285. I am using a GF114 in my computer and it is a good card. However, I'll never call my card a GF104.

Since you can't even manage the simple task of getting the numbers right (a simple difference between a 1 and a 0), what makes your fantasy of Kepler vs Tahiti speculation believable/convincing?

What you don't get, insignificant offending boy, is that what matters is not what Nvidia really released back then, but what they wanted to release, what they aimed for. They released the GTX 480, but they wanted, designed and engineered for the GTX 580. They failed, and we all know that story. Will they fail now? NO. Not according to the info everywhere (even Demerjian). So the fact is that in the past Nvidia always aimed at an 80% performance increase, and in the last generation, on the second try, they nailed it. This time they aimed for the same (it's obvious from the specs) and they got it right the first time, plain and simple.

The specs are out and the number of SPs is not 480 (comparatively) like it was with the GTX 480, and the clocks are not 600-700 MHz. They didn't fail to meet their goals. Specs are 1536 SPs / 950 MHz, and not 1440 SPs / 800 MHz or something like that. They got what they wanted, and they aimed for a 100% improvement, minus x% for inefficiencies.

Your point has been wrong all along. The fact is they doubled up the number of SPs per SM, from 48 up to 96. If the resulting 2.9 TFLOPS chip was going to be just as fast as the 1.5 TFLOPS chip, they would have designed it for 1.45 TFLOPS the "old-fashioned" way; I mean, they wouldn't have doubled up the SP count and die size like that. They would have put in 768 "Fermi-like" SPs and been done with it.
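Where those ~1.5 and ~2.9 TFLOPS figures come from, as a quick Python sketch (FMA counted as 2 FLOPs per shader per clock; the 950 MHz GK104 clock is the rumored number, not a confirmed spec):

```python
def sp_gflops(shader_count, shader_clock_mhz):
    # single-precision GFLOPS: shaders x 2 FLOPs (FMA) x shader clock
    return shader_count * 2 * shader_clock_mhz / 1000.0

print(sp_gflops(512, 1544))   # GTX 580: ~1581 GFLOPS (512 SPs at the 1544 MHz hot clock)
print(sp_gflops(1536, 950))   # rumored GK104: ~2918 GFLOPS (1536 SPs at 950 MHz)
```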

Keep calling me a fanboy, please, one more time at least. I enjoy it, because you are so wrong and you so desperately (and wrongly) think that it makes your point any more valid. :laugh:
 
Joined
Mar 24, 2011
Messages
2,356 (0.49/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
Nvidia's high end will be around 45-55% faster than the GTX 580.

Nvidia will be faster, but it's going to be the exact same situation we have seen time and again.


The HD 6970 at launch was around $370.

The GTX 580 at launch was around $500.

That's a $130 price difference.

6970 to 7970 is around a 40% performance difference.
GTX 580 to 680 is expected to be 45-55%.

This means essentially the same difference we saw between the 6970 and GTX 580, aka 15%,

is about what we will see between a GTX 680 and HD 7970.

Nvidia will be faster by around 15% and charge a $100 premium for the performance difference.

You're also assuming Nvidia will go above AMD's pricing scheme. I think Nvidia is going to go under it. Let's face it, the saving grace for AMD cards is their price, but with the 7xxx series so far even that isn't amazing. Nvidia could easily drop the prices on their current offerings and match AMD while still turning a substantial profit. If Nvidia markets a card with performance equivalent to the HD 7950 for around $300, they would crush AMD in the first few weeks of sales. If they kept their flagship card around $500-600, with a lower model around $450, they would be in a position to just devour AMD's sales, or force AMD to restructure their entire pricing scheme, which would still take time and result in lost sales.

AMD has probably already lost out on the fact that their cards have been in low supply (even more so at launch). Nvidia has had several extra weeks, going on months, to stock up, so they will be able to launch a whole line of cards, in high supply, that could potentially offer better or at least comparable performance. This is all speculation, but from my perspective Nvidia seems to be in a very good spot.
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,763 (1.77/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI GTX 1080 Ti Gaming X
Storage 3x SSDs 2x HDDs
Display(s) Dell U2412M + Samsung TA350
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
The GTX 680 being priced around $600-670 when they have the performance crown is not unheard of.

The US $ is worth less than it used to be, and high-end Nvidia cards have cost this much before: the 8800 GTX,

GTX 280 launched at $650

GTX 480 launched at $500


The 7970 is fast, the 680 will be faster, and the 670 will be priced the same as the 7970 and offer the same performance.


There are no sales to lose or gain; this is the exact same situation as the GTX 400 series vs HD 5000 and GTX 500 vs HD 6000 in terms of performance and price differences, but that's all I'm really at liberty to say.


Just look back at previous launches, it's always the same: these last few years AMD launches first, Nvidia follows, Nvidia retakes the single-GPU crown but also costs more. That's just the way it goes.


Look at the GTX 570 vs HD 6970: the 6970 cost a tiny bit more in the beginning but also won in the majority of benchmarks; in the end their prices were equal, averaged out.

GTX 670 vs 7970 will be the same situation as 570 vs 6970, and Nvidia's 680 will take the performance crown.

Hell, a GTX 480 is only on average 50% faster than a GTX 280 in most games.

This, with what info I have, appears to be about the same difference between a GTX 580 and a 680: around a 50% average delta. Some games get as high as 80%, but the average is 45-55% in general performance.

The difference you see below between a 280 and a 580 is what we will see between a 580 and a 680
 
Joined
Mar 24, 2011
Messages
2,356 (0.49/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
The price on their high-end cards is trending down, and has been since the 8800. Let's look at Nvidia's highest-performing single cards' launch prices:

8800GTX - $650
9800GTX - $350
GTX280 - $650
GTX480 - $500
GTX580 - $500

Compared to AMD/ATi's launch prices:

HD3870 - $240
HD4870 - $300
HD5870 - $400
HD6970 - $370
HD7970 - $550

It seems like both companies are just working towards the $500-550 flagship price point. Aside from the 9800 GTX, which dipped because it was basically an 8800 GTX on a lower stepping, they have continued a trend of dropping the price of their highest-performing single card (not counting post-lineup releases like the 285 and 8800 Ultra, or dual-GPU cards). AMD/ATi, on the other hand, has steadily increased the price.

I'm thinking Nvidia will launch a GTX 680 around $550, a GTX 670 around $450, and a GTX 660 Ti around $350. The 670 will probably handily beat the HD 7970, with the 660 Ti coming close to it. Obviously this is just speculation, but I'm not just throwing numbers out; it would be in line with most of the rumors and the pricing structure Nvidia currently uses.
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,763 (1.77/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI GTX 1080 Ti Gaming X
Storage 3x SSDs 2x HDDs
Display(s) Dell U2412M + Samsung TA350
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
Wrong, you're forgetting the 580 3GB, which is in fact Nvidia's highest-end single GPU. Try again ;)

GTX 580 3GB was $600+ at launch

You're also forgetting the 8800 GTX Ultra, which was $700 at launch.


You can discount them if you like compared to the mainstream top single GPU, but in terms of the SINGLE GPU SKU, Nvidia hasn't been dropping prices; what they have done, however, is offer better value at the typical top end.

8800 Ultra - $800+
9800GTX - $350
GTX280 - $650
GTX480 - $500
GTX580 3GB - $600+

Nvidia's pricing is more consistent; AMD's prices have gone up because they can now compete with Nvidia on even footing most of the time.

Compared to AMD/ATi's launch prices:
HD2900 - $400 - could not compete with Nvidia
HD3870 - $240 - far cheaper than the 2900 series that came before, performance was the same, didn't compete well
HD4870 - $300 - more competitive, good price point, started gaining market share, still behind on performance, but was good value
HD5870 - $400 - strategy change, launched first with DX11, no competition in the market, took a chunk of market share
HD6970 - $370 - fouled-up release date, Nvidia countered before AMD could release, meaning it came out on par with the GTX 480, but Nvidia retook the crown with the 580 1.5GB and 3GB models
HD7970 - $550 - again AMD releases first and offers more performance, Nvidia will counter with a faster chip that costs more; common sense from the data presented over time makes this the logical outcome

Nvidia will launch a GTX 680 that, like the 580 vs 6970 and the GTX 480 vs 5870 before it, costs more but is also faster. That's about what it comes down to. And you can say what you like about AMD's prices, but if they're so damn bad, why are most of the e-tailers people like to deal with sold out and scrambling to get more stock? What's more, AMD is getting more fab time than Nvidia currently :toast:
 