Friday, February 10th 2012

NVIDIA GeForce Kepler Packs Radically Different Number Crunching Machinery

NVIDIA is poised to kick-start its competitive answer to AMD's Southern Islands Radeon HD 7000 series with GeForce Kepler 104 (GK104). We are learning through reliable sources that NVIDIA will implement a radically different design (by NVIDIA's standards, anyway) for its CUDA core machinery, while retaining a basic component hierarchy similar to Fermi's. The new design should allow for greater parallelism. The latest version of GK104's specifications looks like this:

SIMD Hierarchy
  • 4 Graphics Processing Clusters (GPC)
  • 4 Streaming Multiprocessors (SM) per GPC = 16 SM
  • 96 Stream Processors (SP) per SM = 1536 CUDA cores


TMU / Geometry Domain
  • 8 Texture Units (TMU) per SM = 128 TMUs
  • 32 Raster Operation Units (ROPs)
Memory
  • 256-bit wide GDDR5 memory interface
  • 2048 MB (2 GB) standard memory size
Clocks/Other
  • 950 MHz core/CUDA core clock (no hot-clocks)
  • 1250 MHz actual (5.00 GHz effective) memory, 160 GB/s memory bandwidth
  • 2.9 TFLOP/s single-precision floating point compute power
  • 486 GFLOP/s double-precision floating point compute power
  • Estimated die area: 340 mm²
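As a sanity check, the rumored figures above are internally consistent. The short sketch below recomputes the bandwidth and compute numbers from the listed clocks and unit counts; the 4x GDDR5 data rate, the 2-FLOP fused multiply-add per CUDA core per clock, and the implied 1/6 double-precision rate are working assumptions inferred from the listed totals, not confirmed NVIDIA specifications.

```python
# Sanity-check of the rumored GK104 numbers above (leaked specs, not
# confirmed by NVIDIA; the per-clock FLOP rates are assumptions).

cuda_cores = 1536          # 4 GPC x 4 SM x 96 SP
core_clock_ghz = 0.950     # no hot-clock, so shaders run at core clock
mem_clock_ghz = 1.250      # actual GDDR5 clock
bus_width_bits = 256

# GDDR5 transfers 4 bits per pin per clock (quad data rate)
effective_ghz = mem_clock_ghz * 4                      # 5.0 GT/s
bandwidth_gbs = effective_ghz * bus_width_bits / 8     # 160.0 GB/s

# Single precision: 2 FLOPs (one fused multiply-add) per core per clock
sp_tflops = cuda_cores * core_clock_ghz * 2 / 1000     # ~2.92 TFLOP/s

# The listed 486 GFLOP/s double-precision rate works out to 1/6 of SP
dp_gflops = cuda_cores * core_clock_ghz * 2 / 6        # ~486 GFLOP/s

print(bandwidth_gbs, round(sp_tflops, 2), round(dp_gflops, 1))
```

Running it reproduces the listed 160 GB/s, roughly 2.92 TFLOP/s single precision, and roughly 486 GFLOP/s double precision.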
Source: 3DCenter.org

139 Comments on NVIDIA GeForce Kepler Packs Radically Different Number Crunching Machinery

#1
Xaser04
jamsbong said:

@Xaser04 no need to struggle. Just read what I've posted thoroughly and comprehend it before venting off more steam.
How do I vent off more steam when I haven't vented any in the first place?

What is there to comprehend? Benetanegia replies to your posts with well-thought-out arguments, and starting with post #64 you basically do nothing more than call him a fanboy.
#2
sergionography
Benetanegia said:
It makes no difference really, and cards are still made of lots of small power-of-two chunks. The 384-bit memory controller is really 6 x 64-bit memory controllers, each controlling one memory module, so there's your power of 2. Shaders in both AMD and NVIDIA architectures are composed of 16-shader-wide arrays (SIMDs), which is what really does the hard, fundamental work, so power of 2 again. TMUs and ROPs are typically clustered in groups of 4 or 8... but really it makes no real difference. It's like that for convenience, until I hear otherwise. Rendering typically works on quads of pixels, 2x2 or 4x4, so that's why they tend to make it that way for GPUs. Other than that there's no reason that I know of.
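The power-of-two decomposition described in the quote above is easy to verify arithmetically. A minimal sketch (unit counts taken from the quote and from the rumored GK104 specs in the article, purely for illustration):

```python
# Illustrating the point above: "odd" totals like a 384-bit bus or 1536
# shaders still decompose into small power-of-two building blocks.
# Figures are illustrative, from the quoted post and the rumored specs.

def blocks(total, block_size):
    """How many identical blocks of block_size make up total."""
    assert total % block_size == 0, "total must be a multiple of block_size"
    return total // block_size

print(blocks(384, 64))    # 384-bit bus  = 6 x 64-bit memory controllers
print(blocks(1536, 16))   # 1536 shaders = 96 x 16-wide SIMD arrays
print(blocks(128, 8))     # 128 TMUs     = 16 clusters of 8
```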
ok, I just came across these older posts while going through the forum, and there are a few things to note or take into consideration regarding the 256-bit memory controller:

1 - GK104 was meant to be the mid/high-range card, not the high end.

2 - cards like GK104 usually end up in the mobile segment too, so they must be built with both worlds in mind (though I'm not sure whether NVIDIA uses the second- or third-fastest desktop chip for mobile, so correct me if I'm wrong).

3 - considering it was designed as the mid/high part, there are two reasons NVIDIA would choose 256-bit. The first is to purposely limit performance, either to leave room for faster, more expensive cards later or to place the card exactly where they want it in the market in terms of performance and price (similar to what they did with the GTX 460 768 MB and GTX 460 1 GB). The second is that the card may simply not benefit from more bandwidth, which would only make it more expensive for minor gains.

4 - we don't know whether NVIDIA will call GK104 the GTX 660 or the GTX 680, and if they do call it GTX 680, is it because they failed to release GK110 due to yield issues, or because GK104 is sufficient to compete?

If it does end up as the GTX 680, it would probably be the first time NVIDIA has had a smaller die than AMD, though after all the issues they had with Fermi and manufacturing, such a change in methodology is not all that shocking.

Overall, though, NVIDIA seems to have learned a lot from AMD's strengths (I wish AMD would do the same from NVIDIA), as AMD (strictly the graphics division) had more experience facing manufacturing difficulties, knowing what to expect, and working to due dates. I read an article by one of the chief engineers at ATI explaining how things work there (I'll try to find the article and post it; it was about the RV670, how AMD jumped on the 55 nm wagon first, and how the process of releasing products works).
#3
RigRebel
NC37 said:
The end of NV's monolithic GPU era is at hand... was about to say: 'bout freaking time! ATI was slower at first when they switched, but I knew eventually NV would have to change too.

Very interested to see how well NV does at ATI's own game.
You have it twisted... NVIDIA came out with the Fermi architecture and CUDA cores (which drastically changed DX11 gaming and tessellation) while AMD was still playing in yesteryear with the older VLIW4 architecture... It was AMD that copied NVIDIA's groundbreaking Fermi architecture and called it GCN... NVIDIA came out with the multi-streaming, multicore Fermi design first, way before AMD's GCN > http://www.nvidia.com/object/fermi_architecture.html?ClickID=azzwwsat59szk05t0aa0sll0zsrttknlzsks and over a year later > http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review Notice this review states AMD was using the older single-line VLIW4 architecture prior to the 7970, while NVIDIA was using the multicore, multi-streaming Fermi architecture. So you have it backwards... NVIDIA revolutionized the GPU with Fermi, then AMD copied it to catch up, CHANGED THEIR chip, and called it GCN.

Don't get it twisted... It's NVIDIA's design that AMD copied, tweaked, called GCN, and then said "look, we win" lol... Or you could say NVIDIA came out with the Big Mac and over a year later AMD came out with Le' Big Mic! lol Now suddenly all the AMD/ATI noobs who don't know the history or any better say "look, a new sandwich, we win!"... lol It's NVIDIA's game... always has been and always will be. AMD is just playing at it. And to prove it, the 700 series is gonna come out and steal AMD's thunder (again) till they catch up ANOTHER year or two later. :bounce:
#4
RigRebel
crazyeyesreaper said:
At this point, who gives a flying fuck? I couldn't care less if the NVIDIA Kepler GPU is Oscar the Grouch doing calculations on a TI-82. Kepler is coming, but it's not here yet, so it doesn't matter what its transistor count is, what its shader design is, etc., because looking at specs doesn't give us actual performance numbers in terms of what it's capable of.

Nothing matters till we see reviews. I don't care what Kepler has in the wings; it's still smoke and mirrors. Even then it's hogwash if we go on specs and theoretical maximum calculations: AMD has won every time in theoretical output, yet it doesn't actually win. So let's just save the arguments for when we see real performance numbers; then we can bitch, moan, and complain about who's the greatest EVAR and who's a loser.
LOL... AMD copied Fermi and called it GCN... read my previous post and get a clue. NVIDIA created the multicore, multi-streaming Fermi architecture WAY before AMD copied it, called it GCN, and said "we win" lol. Better get your facts straight. If AMD is "currently" winning on computation, it's because they ditched their loser single-stream VLIW4 6000-series architecture, tweaked NVIDIA's 500-series Fermi design, and called it GCN... You want benches? Look at the 500 series, which tore AMD a new one on DX11 and tessellation so badly that AMD had to change/copy to a similar architecture that "magically, lo and behold" incorporates a design for better tessellation and DX11 FPS. LOL, DUH, they had to, to catch up! Too bad they still couldn't keep DX9 at a decent level like the 500 series did. Why couldn't they? Because they didn't invent the technology, they just stole it, and they don't know wtf they are doing! lol :banghead: And all these noobs (acting like the 700 series is just a myth trying to keep up with the AMD 7000 series) have it so twisted it's not even funny. :banghead: AMD came out with the 7000 series just to catch up to NVIDIA's Fermi. The soon-to-be-released NVIDIA 700 series is a redesign of the existing 500-series Fermi architecture and has probably been in design since way before GCN, given that the 600 series in mobile is already out. You AMD noobs should get your GPU history right before you post...

The ONLY time ATI/AMD had any real self-ingenuity advantage over NVIDIA was way back when Doom 3 came out (in 2004, over 8 years ago), because AMD had onboard decoding and encoding while NVIDIA's driver-based coding had a serious problem with Doom 3 at first. That small six-month-to-a-year hiccup was AMD's ONLY shining moment over NVIDIA, and NVIDIA has been knocking ATI/AMD in the dirt ever since. AMD is still trying to catch up, and you're just catching one hop of the leapfrog that happens to be AMD's turn... But again, the only jump forward AMD is making is that they got smart, ditched the single-stream VLIW4 6000-series architecture, and copied a similar multi-stream Fermi-style architecture. It's just clever marketing that they describe their newest chip design in reviews like it's revolutionary... well, it WAS revolutionary... over a year ago, when NVIDIA CREATED IT and called it FERMI!

http://www.nvidia.com/object/fermi_architecture.html?ClickID=azzwwsat59szk05t0aa0sll0zsrttknlzsks and http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review Don't the GCN core pictures (page 3) for the Radeon 7970 look familiar? lol They should; similar architecture diagrams were in the first link's Fermi architecture press release over a year ago! lol The GCN architecture = a.k.a. Le' Big Mic lol, over a year later lol

... so stick those theoretical copycat numbers up your GPU. lol I doubt you'd believe anything but the crud AMD Radeon is selling you (because they're second best) anyway. People are so ignorantly sold on the "drama" that NVIDIA is the big selfish giant and AMD the independent hero that they don't actually research or know the facts... If it weren't for the Fermi architecture, with better DX11 FPS, better tessellation, and multiple read streams, we wouldn't have the FPS or the DX11 games that we do. Hardware always spurs on new games and software that take advantage of the technology, and NVIDIA's Fermi architecture greatly spurred DX11 and tessellation in games like Skyrim, BF3, Batman: AC, The Witcher 2, and more... you should be thanking them, honestly. I know AMD secretly is, because without copying FERMI, AMD would be AM-DONE. lol rolfloflmaoftwpwn!
#5
sergionography
RigRebel said:
LOL... AMD copied Fermi and called it GCN... read my previous post and get a clue. NVIDIA created the multicore, multi-streaming Fermi architecture WAY before AMD copied it, called it GCN, and said "we win" lol. Better get your facts straight. If AMD is "currently" winning on computation, it's because they ditched their loser single-stream VLIW4 6000-series architecture, tweaked NVIDIA's 500-series Fermi design, and called it GCN... You want benches? Look at the 500 series, which tore AMD a new one on DX11 and tessellation so badly that AMD had to change/copy to a similar architecture that "magically, lo and behold" incorporates a design for better tessellation and DX11 FPS. LOL, DUH, they had to, to catch up! Too bad they still couldn't keep DX9 at a decent level like the 500 series did. Why couldn't they? Because they didn't invent the technology, they just stole it, and they don't know wtf they are doing! lol :banghead: And all these noobs (acting like the 700 series is just a myth trying to keep up with the AMD 7000 series) have it so twisted it's not even funny. :banghead: AMD came out with the 7000 series just to catch up to NVIDIA's Fermi. The soon-to-be-released NVIDIA 700 series is a redesign of the existing 500-series Fermi architecture and has probably been in design since way before GCN, given that the 600 series in mobile is already out. You AMD noobs should get your GPU history right before you post...

The ONLY time ATI/AMD had any real self-ingenuity advantage over NVIDIA was way back when Doom 3 came out (in 2004, over 8 years ago), because AMD had onboard decoding and encoding while NVIDIA's driver-based coding had a serious problem with Doom 3 at first. That small six-month-to-a-year hiccup was AMD's ONLY shining moment over NVIDIA, and NVIDIA has been knocking ATI/AMD in the dirt ever since. AMD is still trying to catch up, and you're just catching one hop of the leapfrog that happens to be AMD's turn... But again, the only jump forward AMD is making is that they got smart, ditched the single-stream VLIW4 6000-series architecture, and copied a similar multi-stream Fermi-style architecture. It's just clever marketing that they describe their newest chip design in reviews like it's revolutionary... well, it WAS revolutionary... over a year ago, when NVIDIA CREATED IT and called it FERMI!

http://www.nvidia.com/object/fermi_architecture.html?ClickID=azzwwsat59szk05t0aa0sll0zsrttknlzsks and http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review Don't the GCN core pictures (page 3) for the Radeon 7970 look familiar? lol They should; similar architecture diagrams were in the first link's Fermi architecture press release over a year ago! lol The GCN architecture = a.k.a. Le' Big Mic lol, over a year later lol

... so stick those theoretical copycat numbers up your GPU. lol I doubt you'd believe anything but the crud AMD Radeon is selling you (because they're second best) anyway. People are so ignorantly sold on the "drama" that NVIDIA is the big selfish giant and AMD the independent hero that they don't actually research or know the facts... If it weren't for the Fermi architecture, with better DX11 FPS, better tessellation, and multiple read streams, we wouldn't have the FPS or the DX11 games that we do. Hardware always spurs on new games and software that take advantage of the technology, and NVIDIA's Fermi architecture greatly spurred DX11 and tessellation in games like Skyrim, BF3, Batman: AC, The Witcher 2, and more... you should be thanking them, honestly. I know AMD secretly is, because without copying FERMI, AMD would be AM-DONE. lol rolfloflmaoftwpwn!
umm, you need to chill lol. We know very well how the GTX 480 and GTX 470 went, and even the 500 series, while fast, was never that efficient; yes, the GTX 580 outperformed a 6970 by about 20%, but it had a die size of 550 mm².
AMD could have easily built a die that big, fit more transistors in, and still matched the GTX 580 at the same power consumption or even lower; if you truly did your research you would know that. But I agree Fermi did introduce revolutionary tech, in the same way Bulldozer did: tech that will only mature with time but wasn't all that refined at release.
#6
RigRebel
sergionography said:
umm, you need to chill lol. We know very well how the GTX 480 and GTX 470 went, and even the 500 series, while fast, was never that efficient; yes, the GTX 580 outperformed a 6970 by about 20%, but it had a die size of 550 mm².
AMD could have easily built a die that big, fit more transistors in, and still matched the GTX 580 at the same power consumption or even lower; if you truly did your research you would know that. But I agree Fermi did introduce revolutionary tech, in the same way Bulldozer did: tech that will only mature with time but wasn't all that refined at release.
I know full well about the GTX 480 delays, bugs, and revisions, and about both the GTX 480 and GTX 470 barely performing on par with ATI... but nice of you to presume I don't just because you do lol :shadedshu .. If I were you, I'd calm down and read everything carefully before commenting. lol... Did I not cite history and info from way back in 2004? Wouldn't that precede the GTX 470? lol. I didn't comment on that series because that wasn't the point... Just because I didn't comment on it does nothing to show I don't know about it, especially when it wasn't really pertinent to the subject, because rest assured NVIDIA is aware of its past mistakes and they will probably have no bearing on the 700 series IMO... But thanks for bringing it up :)

I'm also very sure, given that NVIDIA pioneered the Fermi architecture and that it's still fairly new and fresh, that it has a long way to go and more possibilities to hit, because what's the sense of creating a whole new architecture if it doesn't have headroom to accommodate the next several years? And is that not an architecture that NVIDIA created and AMD is merely copycatting one step at a time? I'm sure NVIDIA has years of plans for Fermi and limitless possibilities; THEY CREATED IT. I'm sure they know full well all its avenues and applications, and that it's leaps and bounds beyond AMD in a similar architecture. The 700 series will in fact crush the current wannabe attempt from AMD, because currently (in the $200-$300 market, where the real market war is won and lost) AMD's barely-released 7850 had reference-clock benches BARELY beating NVIDIA's 560 Ti (which has been a phenomenal price-point card!) in DX11, and it failed miserably in DX9, which NVIDIA still does well. Not to mention that the 7850 has 1 GiB more than the 560 Ti, and the reference board only performed marginally better than the reference 560 Ti. Now, what's the big deal with DX9? Not much, really, except that the most demanding game for video rendering and drawing right now is actually SC2 in 4v4 mode, which is DX9, and the 560 Ti still killed the 7850 by 20 FPS. So AMD is releasing a barely-better version that does worse on the old stuff? Nice. Again, no fear whatsoever that NVIDIA is going to knock AMD back in the dirt!

Do you work in any technology manufacturing field? I do; I work with years upon years of laser technology. I know what stages are involved in many avenues of product development, from design to deployment to the money! I know that something you see tomorrow from NVIDIA was probably on a drawing board 2-3 years before you saw it, that they probably had a mock-up model a year ago and a functioning test model at least six months to a year ago... lol And I certainly know how to do my homework, but thanks for presuming I don't just because you do and because I didn't mention something off-topic. Learn not to presume so much about what's not said lol :nutkick:

PS... I've even done field trips lol... I've been (on more than one occasion) to one of ATI's engineering facilities and talked first-hand with PCB, CPU, and RAM engineers about Radeon cards, AND I have worked at a major gaming company's corporate center with unrestricted models of the NVIDIA 8000 series... have you? lol

PSS... Bulldozer is a joke and was an extreme disappointment at the time to any true gamer with half the sense to read benchmarks, because its single-threaded performance was horrible and about 98% of the games out there are single-threaded... Windows had to come out with a patch just to set it right. Plus, all they did was create hyperthreading pipes so large you could fit a truck through them. Lol, a lot of good that did for gaming (which is pertinent to this discussion because you mentioned a gaming graphics card and Bulldozer in the same paragraph, presumably as having linked innovation), because the only games I know of that use hyperthreading are Civ V and Oblivion... PLUS, that's like putting 22s on a Bonneville and calling it badass and innovative! rofl And Bulldozer still didn't top i7s; and the first batch of FX-6100s that came to my local CompUSA were all BAD and wouldn't work with 990FX chipsets lol. I stood there watching as my good tech friend behind the counter put every FX-6100 chip they had left on a chip tester, and one after another was bad! This was only after two different customers had come in to swap the FX-6100s they'd just bought, two or three times over! If you're going to use an example like that, then YOU should definitely DO YOUR RESEARCH and maybe get a little real-world experience first! LoL And pick a better example lol :nutkick: :nutkick: :nutkick: And before you start with the 2nd-gen i-series B2/B3 problem, let me stop you there. That was entirely a motherboard bridge problem affecting SATA III; it hit motherboards and motherboard manufacturers, and it wasn't the fault of the CPU itself that they didn't get the chipset right.

So to recap, before you start a rebuttal and start spitting out stuff, you clearly need to research better or look at yourself:
1. Make sure your points are on topic and relevant, because the last-series 480s and 470s clearly are not.
2. Make sure you pick better examples than Bulldozer, LOL, fail.
And finally, 3. Make sure you read more carefully, do your own research, and know who you're talking to and what their experience is before you open your crap trap and tell anyone to do their homework, because mine is way done and you've been taken to school :) lol :nutkick: :)
#7
sergionography
RigRebel said:
I know full well about the GTX 480 delays, bugs, and revisions, and about both the GTX 480 and GTX 470 barely performing on par with ATI... but nice of you to presume I don't just because you do lol :shadedshu .. If I were you, I'd calm down and read everything carefully before commenting. lol... Did I not cite history and info from way back in 2004? Wouldn't that precede the GTX 470? lol. I didn't comment on that series because that wasn't the point... Just because I didn't comment on it does nothing to show I don't know about it, especially when it wasn't really pertinent to the subject, because rest assured NVIDIA is aware of its past mistakes and they will probably have no bearing on the 700 series IMO... But thanks for bringing it up :)

I'm also very sure, given that NVIDIA pioneered the Fermi architecture and that it's still fairly new and fresh, that it has a long way to go and more possibilities to hit, because what's the sense of creating a whole new architecture if it doesn't have headroom to accommodate the next several years? And is that not an architecture that NVIDIA created and AMD is merely copycatting one step at a time? I'm sure NVIDIA has years of plans for Fermi and limitless possibilities; THEY CREATED IT. I'm sure they know full well all its avenues and applications, and that it's leaps and bounds beyond AMD in a similar architecture. The 700 series will in fact crush the current wannabe attempt from AMD, because currently (in the $200-$300 market, where the real market war is won and lost) AMD's barely-released 7850 had reference-clock benches BARELY beating NVIDIA's 560 Ti (which has been a phenomenal price-point card!) in DX11, and it failed miserably in DX9, which NVIDIA still does well. Not to mention that the 7850 has 1 GiB more than the 560 Ti, and the reference board only performed marginally better than the reference 560 Ti. Now, what's the big deal with DX9? Not much, really, except that the most demanding game for video rendering and drawing right now is actually SC2 in 4v4 mode, which is DX9, and the 560 Ti still killed the 7850 by 20 FPS. So AMD is releasing a barely-better version that does worse on the old stuff? Nice. Again, no fear whatsoever that NVIDIA is going to knock AMD back in the dirt!

Do you work in any technology manufacturing field? I do; I work with years upon years of laser technology. I know what stages are involved in many avenues of product development, from design to deployment to the money! I know that something you see tomorrow from NVIDIA was probably on a drawing board 2-3 years before you saw it, that they probably had a mock-up model a year ago and a functioning test model at least six months to a year ago... lol And I certainly know how to do my homework, but thanks for presuming I don't just because you do and because I didn't mention something off-topic. Learn not to presume so much about what's not said lol :nutkick:

PS... I've even done field trips lol... I've been (on more than one occasion) to one of ATI's engineering facilities and talked first-hand with PCB, CPU, and RAM engineers about Radeon cards, AND I have worked at a major gaming company's corporate center with unrestricted models of the NVIDIA 8000 series... have you? lol

PSS... Bulldozer is a joke and was an extreme disappointment at the time to any true gamer with half the sense to read benchmarks, because its single-threaded performance was horrible and about 98% of the games out there are single-threaded... Windows had to come out with a patch just to set it right. Plus, all they did was create hyperthreading pipes so large you could fit a truck through them. Lol, a lot of good that did for gaming (which is pertinent to this discussion because you mentioned a gaming graphics card and Bulldozer in the same paragraph, presumably as having linked innovation), because the only games I know of that use hyperthreading are Civ V and Oblivion... PLUS, that's like putting 22s on a Bonneville and calling it badass and innovative! rofl And Bulldozer still didn't top i7s; and the first batch of FX-6100s that came to my local CompUSA were all BAD and wouldn't work with 990FX chipsets lol. I stood there watching as my good tech friend behind the counter put every FX-6100 chip they had left on a chip tester, and one after another was bad! This was only after two different customers had come in to swap the FX-6100s they'd just bought, two or three times over! If you're going to use an example like that, then YOU should definitely DO YOUR RESEARCH and maybe get a little real-world experience first! LoL And pick a better example lol :nutkick: :nutkick: :nutkick: And before you start with the 2nd-gen i-series B2/B3 problem, let me stop you there. That was entirely a motherboard bridge problem affecting SATA III; it hit motherboards and motherboard manufacturers, and it wasn't the fault of the CPU itself that they didn't get the chipset right.

So to recap, before you start a rebuttal and start spitting out stuff, you clearly need to research better or look at yourself:
1. Make sure your points are on topic and relevant, because the last-series 480s and 470s clearly are not.
2. Make sure you pick better examples than Bulldozer, LOL, fail.
And finally, 3. Make sure you read more carefully, do your own research, and know who you're talking to and what their experience is before you open your crap trap and tell anyone to do their homework, because mine is way done and you've been taken to school :) lol :nutkick: :)
if fermi is the topic then gtx 480 is very much relevant since it IS fermi LOL.
vliw has the edge in graphical tasks, Fermi has more compute capabilities, AMD. This round improved compute while keeping the edge in graphical tasks, that's what and does, as for copying, um NVIDIA was all about big cores with hot clocks, now I can say the same thing that NVIDIA copied and by dropping not clocks and fitting more cores but I'm not gonna get down to that level, idk what on earth males GCN anything like Fermi lmao,like its a freaking 2000core chip, NVIDIA never built an architecture like that until now sorta. As for dx9 one game doesn't tell it all, much more factors might be the cause but it doesn't matter really so I don't think I need to bring this point
As for bulldozer well AMD had yield issues which were apperant with llano too, NVIDIA had that issue with fermi, it almost always happens when moving to a new node, bulldozer is a revolutionary core in theory, AMD. Just failed to deliver this time around same way fermi failed with gtx400, this is y I mentioned it, now if ur opinion sais otherwise then good for u, cuz I simply happen to disagree as I've done enough research on the matter
#8
Steevo
VLIW raped NVIDIA's designs for years.

NVIDIA came out with an idea and design that was late, hot, and badly implemented; they refined it and won.

AMD had been building the same thing for many years; a GPU doesn't happen in a few months' time. They saw the writing on the wall that computing was becoming more generalized, and with the purchase of ATI they managed to get a very, very good foothold in this area.

NVIDIA is countering their early inefficiency with experience brought over from their mobile department: a smaller, more efficient chip that uses tech from their more power-efficient Tegra designs.

Bulldozer belongs nowhere in this discussion. Back on topic.
#9
Aquinus
Resident Wat-man
I'm trying to understand people when they say that the new GTX 680 isn't supposed to be the fastest Kepler model. If that is so, why is its product placement so high? The 690 is still supposed to be NVIDIA's dual-GPU solution, right? That doesn't leave a whole lot of room to put a successor in this generation. Also, who cares about NVIDIA vs. AMD? When push comes to shove, the 7970 has been out for how many months, and how many users have bought 7000-series cards? That is the real point: Kepler was coming so late that something had to be released. It sounds like a Bulldozer that didn't completely flop (rather, it just loses ground compared to the headroom previous top-end models had).
#10
RigRebel
sergionography said:
if Fermi is the topic, then the GTX 480 is very much relevant, since it IS Fermi LOL.
VLIW has the edge in graphical tasks; Fermi has more compute capability. This round AMD improved compute while keeping the edge in graphical tasks; that's what AMD does. As for copying: NVIDIA was all about big cores with hot-clocks, and now that they're dropping hot-clocks and fitting in more cores I could say NVIDIA copied, but I'm not gonna get down to that level. I don't know what on earth makes GCN anything like Fermi lmao; it's a 2000-core chip, and NVIDIA never built an architecture like that until now, sorta. As for DX9, one game doesn't tell it all; other factors might be the cause, but it doesn't really matter, so I don't think I need to press this point.
As for Bulldozer, AMD had yield issues, which were apparent with Llano too; NVIDIA had that issue with Fermi. It almost always happens when moving to a new node. Bulldozer is a revolutionary core in theory; AMD just failed to deliver this time around, the same way Fermi failed with the GTX 400 series, which is why I mentioned it. If your opinion says otherwise, good for you, because I simply happen to disagree, as I've done enough research on the matter.
The GTX 480 does not have any relevance, because: 1. You're just desperately trying to use it as a past example of how NVIDIA can fail, when NVIDIA has already GONE PAST the GTX 480 and created the 580, which was every bit a success! Your original post was based on the idea that we should hold our breath because of the GTX 480 and GTX 470, but that was two generations ago, and you're just looking for something to hold on to and make yourself look smart when you can't even stay on F'ING target! You're pulling up a past that's already the PAST and trying to equate it to NVIDIA possibly failing this time, when THEY HAVE ALREADY SUCCEEDED IN PASSING THE GTX 480! Thus the GTX 480 = NOT RELEVANT... get it now, genius? 2+2, stay on target!
#11
erocker
This thread has nothing to do with GTX 480, Fermi, etc. Old news, get over it. Stay on topic.
#12
erocker
Thread cleaned of off topic posts, after my warning. I won't ask again.
#13
phanbuey
Steevo said:
VLIW raped NVIDIA's designs for years.

NVIDIA came out with an idea and design that was late, hot, and badly implemented; they refined it and won.

AMD had been building the same thing for many years; a GPU doesn't happen in a few months' time. They saw the writing on the wall that computing was becoming more generalized, and with the purchase of ATI they managed to get a very, very good foothold in this area.

NVIDIA is countering their early inefficiency with experience brought over from their mobile department: a smaller, more efficient chip that uses tech from their more power-efficient Tegra designs.

Bulldozer belongs nowhere in this discussion. Back on topic.
Yes, you're right... the 2900 XT, then the 3870 that got its ass kicked, then the 4870, which was good for the price but always second best... oh yeah, the 6xxx series, which is also second best.

The only time Nvidia's design "lost" was Fermi vs. the 5xxx series, and even then they still had the performance crown, just too late to market. That's one design out of five.

Where are you getting your info that this is Tegra tech?
Posted on Reply
#14
RigRebel
Is the launch date for Kepler still 3/22? ... :toast::roll:

Steevo said:
VLIW beat Nvidia's designs for years.

Nvidia came out with an idea and design that was late, hot, and badly implemented; they refined it and won.

AMD was building the same for many years, a GPU doesn't happen in a few months' time, and they saw the writing on the wall that computing was becoming more generalized; with the purchase of ATI they managed to get a very good foothold in this area.

Nvidia is countering their early inefficiency with experience brought from their mobile department: a smaller, more efficient chip that uses tech brought from the more power-efficient Tegra designs.


Bulldozer belongs nowhere in this discussion. Back on topic.
Sergi started it by citing Bulldozer; I just rebutted it, lol.

phanbuey said:
Yes you're right... the 2900xt, then the 3870 that got its ass kicked, then the 4870 which was good for the price but was always second best.... oh yeah... the 6xxx series, which is also second best.

the only time nvidia's design "lost" was Fermi vs the 5xxx series, but then they still had performance crown, just too late to market. That's one design out of 5.

Where are you getting you info that this is tegra tech?
Ditto... show sources for Kepler being built on Tegra tech? Or is that just speculation?

Aquinus said:
I'm trying to understand people when they say that the new GTX 680 isn't supposed to be their fastest Kepler model. If that is so, why is its product placement so high? The 690 is still supposed to be nVidia's dual-GPU solution, right? That doesn't leave a whole lot of room to put a successor in this generation. Also, who cares about nVidia vs. AMD; when push comes to shove, the 7970 has been out for how many months, and how many users have bought 7000-series cards? That is the real point: Kepler was coming so late that something had to be released. It sounds like a Bulldozer that didn't completely flop (it just lost ground compared to the headroom previous top-end models had).
Where are you getting your info? It's been barely 2 months and 10 days... I'd hardly call that "so late". And as for how many people have bought one, idk, I don't have the fiscal statements for 7970 purchases over the last two months... do you? lol...

Actually, 2.33 months is hardly enough time to rest on. Kepler is presumably 3 days away. If Kepler owns, then 2 months and 10 days on top is barely long enough to call a victory for the 7970. It's more like treading water till the sharks arrive! lol. Especially since it takes way longer than 2 months and 13 days to produce a series, which means Nvidia has been working on the 700s for a while and is only "fashionably" late... A.K.A. while AMD is stuffing hors d'oeuvres and drinking champagne in hollow victory, Nvidia's gonna show up via the VIP private entrance and F$#! the prom queen... lol -pimpslap- :pimp:
PS: perhaps you were thinking of the 6970? :confused:
Posted on Reply
#15
erocker
@RigRebel.. Stop double/triple posting. Use the edit button to add to your posts or use the multi-quote button to quote multiple posts.
Posted on Reply
#16
xenocide
Aquinus said:
I'm trying to understand people when they say that the new GTX 680 isn't supposed to be their fastest Kepler model. If that is so, why is its product placement so high? The 690 is still supposed to be nVidia's dual-GPU solution, right?
It's not because of the model number, it's because of the chip number (GK104). When Fermi was in development, it was GF104/GF114 (GTX460/560) for the Mid-Range model, and GF100/GF110 (GTX480/580) for the highest end model. Nvidia has used a similar naming scheme for their chips for going on a decade now, maybe longer.

This card is listed as GK104, which means it was originally designed to replace cards like the GTX 460/560/560 Ti. There was information suggesting a GK100 and even a GK110 were in development, but that disappeared. Couple this with Nvidia PR people and the CEO saying they expected more from AMD, and it would appear that Nvidia just took its intended mid-range offering, tweaked it, and relabeled it the GTX 680.

Just look at the specs for the GTX 680 that we know: 2 GB VRAM, a 256-bit memory bus, and the GK104 chip. It has all the markings of what should have been a GTX 660. The actual model number is secondary to the chip.
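As a sanity check on those specs, the 160 GB/s figure in the article falls straight out of the rumored 256-bit bus and 5.0 GHz effective GDDR5 clock. A quick back-of-envelope sketch (nothing here is confirmed beyond the rumored numbers):

```python
# Memory bandwidth = bus width in bytes x effective transfer rate.
bus_width_bits = 256
effective_rate_gts = 5.0  # 1250 MHz actual, quad-pumped GDDR5 -> 5.0 GT/s

bandwidth_gb_s = (bus_width_bits / 8) * effective_rate_gts
print(bandwidth_gb_s)  # 160.0 GB/s, matching the rumored spec
```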
Posted on Reply
#17
RigRebel
erocker said:
@RigRebel.. Stop double/triple posting. Use the edit button to add to your posts or use the multi-quote button to quote multiple posts.
Sorry :( done.. :( Still getting the hang of this site.
Posted on Reply
#18
Aquinus
Resident Wat-man
xenocide said:
It's not because of the model number, it's because of the chip number (GK104). When Fermi was in development, it was GF104/GF114 (GTX460/560) for the Mid-Range model, and GF100/GF110 (GTX480/580) for the highest end model. Nvidia has used a similar naming scheme for their chips for going on a decade now, maybe longer.

This card is listed as GK104, which means it was originally designed to replace cards like the GTX460/560/560Ti. There was information suggesting there was a GK100 and even a GK110 in development, but that disappeared. Couple this with Nvidia PR people and CEO's saying they expected more from AMD, and it would appear that Nvidia just took their intended mid-range offering, tweaked it, and relabeled it a GTX680.

Just look at the specs for the GTX 680 that we know: 2 GB VRAM, a 256-bit memory bus, and the GK104 chip. It has all the markings of what should have been a GTX 660. The actual model number is secondary to the chip.
But who cares what the hardware model itself is? By naming it the GTX 680, they placed it to compete directly with the 7970, so it's the top of Kepler unless nVidia is changing their naming scheme. If there is going to be this supposed second chip, then why wasn't this card placed as the 670? You see my point? It may come quickly, but nVidia is making it look like they won't have it ready for this GPU line-up.

RigRebel said:
Where are you getting your info? It's been barely 2 months and 10 days... I'd hardly call that "so late". And as for how many people have bought one, idk, I don't have the fiscal statements for 7970 purchases over the last two months... do you? lol...
What proof do I need? Look around and look at all the people with 7970s, and the simple fact that the 7970 has been out for two months and Kepler is nowhere to be seen. You can't say that doesn't benefit AMD.
Posted on Reply
#19
RigRebel
Aquinus said:
But who cares what the hardware model itself is? They placed it to compete directly with the 7970 just by naming it the GTX 680, so it's the top for Kepler unless nVidia is changing their naming scheme. If there is going to be this said second chip, then why wasn't this placed as the 670? You see my point? It may be quick but nVidia is making it look like they won't have it ready for this GPU line-up.



What proof do I need? Look around and look at all the people with 7970s, and the simple fact that the 7970 has been out for two months and Kepler is nowhere to be seen. You can't say that doesn't benefit AMD.
Look all around? lol, so the little old lady picking her nose at the toll booth has a 7970? If I look all around right now I just see my dog, and I'm pretty sure he's not rocking a 7970... lol. I think you're confusing your opinion and conjecture (however likely it may be) with data... poop some data please. And again I must educate an AMD fanboy... I'll provide a link later today which states it's more like the 660 Ti will perform as the 580 did. More on that later; the link is on another PC's browser. Peace.
Posted on Reply
#20
Aquinus
Resident Wat-man
RigRebel said:
Look all around? lol, so the little old lady picking her nose at the toll booth has a 7970? If I look all around right now I just see my dog, and I'm pretty sure he's not rocking a 7970...
Saying this is necessary why? Neither of these will buy a nVidia card either...

RigRebel said:
lol. I think you're confusing your opinion and conjecture (however likely it may be) with data... poop some data please. And again I must educate an AMD fanboy... I'll provide a link later today which states it's more like the 660 Ti will perform as the 580 did. More on that later; the link is on another PC's browser. Peace.
That is kind of insulting... I'm an AMD fanboy because I own a Sandy Bridge-E? Because I've used nVidia cards in the past? Also, what is with using words like "poop"? Couldn't you think of a more mature and intellectual word to use? Kepler can't make money if it hasn't been released; the 7970 can make money because it is actually on the market and people are buying it, and it has been on the market for 2 months without a real competitor. I don't need numbers to show that you can't buy a product that hasn't been released yet; that is common sense, and if you need numbers to be convinced of that, you should stop talking right now.

I would appreciate some maturity and some logical reasoning if you're going to call me an "AMD fanboy", which is just being used to slur and discredit what I have to say, and that is insulting and uncalled for.

Also I'm a System Admin and I have a degree in computer science, what are you doing? Don't pander to me and try to tell me what I know and who I am.

Edit: As you're a new user, I highly recommend that you read the rules.
Posted on Reply
#21
Tatty_One
Super Moderator
The name calling and accusation throwing is starting to become tiresome, carry out your discussions/disagreements in a civil manner or I will start dishing out the points, a little bit of maturity goes a long way!
Posted on Reply
#22
TheMailMan78
Big Member
Aquinus said:
Also I'm a System Admin and I have a degree in computer science.
I'm an administrator also.....an admin of WINNING! And I have a PHD also.......Pimpin Hoes Degree.

Listen man, I learned a long time ago not to argue with newer members, as things spin out of control fast when they are not familiar with the culture of TPU. They have "teething" issues, for lack of a better term. Relax man. Don't take it personally. I see where you are coming from with the market placement and I agree. However, it's all up in the air until we see some real benches from W1zz.

Edit: Just saw you're a new member also. lol.
Posted on Reply
#23
jaredpace
Hahah, TPU is the coolest by far.
Posted on Reply
#24
Aquinus
Resident Wat-man
TheMailMan78 said:
However its all up in the air until we see some real benches from W1zz.
People don't seem to realize this... and I didn't take it personally; it was to make the point that you shouldn't insult people. It was a "you can't get away with that kind of behavior" post. Also, I do get defensive when people call me out; it's a natural tendency. :ohwell:
Posted on Reply
#25
TheMailMan78
Big Member
Aquinus said:
People don't seem to realize this... and I didn't take it personal, it was to make a point that you shouldn't insult people. It was a "you can't get away with that kind of behavior" post. Also I do get defensive when people call me out, it's a natural tendency. :ohwell:
People call me out all the time. You have to consider the source. Personally, I believe it's their right to be wrong... and my right to laugh at them and poke fun at them without them realizing it. Remember, this is just the internet, and the TPU mods do their job well. Anyway... ALEMAN FORWARD TO TOPIC!
Posted on Reply