
ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.

Discussion in 'News' started by Polaris573, Jun 17, 2008.

  1. Megasty New Member

    Joined:
    Mar 18, 2008
    Messages:
    1,263 (0.54/day)
    Thanks Received:
    82
    Location:
    The Kingdom of Au
    I was thinking the same thing. Given the size of the present ATi chips, they could be combined & still retain a reasonably sized die, but the latency between the 'main' cache & 'sub' cache would be so high that they might as well leave them apart. It would be fine if they increased the bus, but then you would end up with a power-hungry monster. If the R800 is a multi-core then so be it, but we're gonna need a power plant for the thing if it's not going to be just another experiment like the R600.
  2. imperialreign

    imperialreign New Member

    Joined:
    Jul 19, 2007
    Messages:
    7,043 (2.71/day)
    Thanks Received:
    909
    Location:
    Sector ZZ₉ Plural Z Alpha

    I see your point, and I slightly agree as well . . . but that's looking at it through the lens of current tech, current architectures and current fabrication means.

    If AMD/ATI can develop a more sound fabrication process, or reduce the number of dead cores, it would make it viable, IMO.

    I'm just keeping in mind that over the last 6+ months, AMD has been making contact with some reputable companies who've helped them before, and has also taken on quite a few new personnel who are very well respected and among the top people in their fields.

    The Fusion itself is, IMO, a good starting point, and proof to AMD themselves that they can do it. Integrating a GPU core like that wouldn't be resource-friendly if their fabrication process left them with a lot of dead fish in the barrel - they would be losing money just designing such an architecture if fabrication shot them in the foot.

    Perhaps they've come up with a way to stitch two cores together so that if one is dead from fabrication it doesn't cripple the chip, and the GPU can be slapped on a lower-end card and shipped. Can't really be sure right now, as AMD keeps throwing out one surprise after another . . . perhaps this will be the one where they hit the home run?
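
    To put rough numbers on the dead-core point, here's a quick Poisson yield sketch in Python. The defect density and die areas are made-up, illustrative values, not anything AMD or the fabs have published:

    Code:
    import math

    # Simple Poisson yield model: yield = exp(-area * defect_density).
    # All numbers here are assumed purely for illustration.
    D0 = 0.004          # defects per mm^2 (assumed)
    big_die = 576.0     # one large monolithic die, mm^2
    small_die = 288.0   # each of two dies at half the area, mm^2

    y_big = math.exp(-big_die * D0)      # chance a monolithic die is fully good
    y_small = math.exp(-small_die * D0)  # chance one small die is good

    both_good = y_small ** 2                      # same silicon yield as the big die
    at_least_one_good = 1 - (1 - y_small) ** 2    # salvageable as a lower-end part

    print(f"monolithic die fully good:   {y_big:.0%}")                  # ~10%
    print(f"two small dies, both good:   {both_good:.0%}")              # ~10%
    print(f"at least one good (salvage): {at_least_one_good:.0%}")      # ~53%

    The interesting bit is that the "both good" case is no better than the big die; the win only shows up if a partly dead part can still be sold, which is exactly the stitch-two-cores idea above.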
  3. [I.R.A]_FBi

    [I.R.A]_FBi New Member

    Joined:
    May 19, 2007
    Messages:
    7,664 (2.88/day)
    Thanks Received:
    540
    Location:
    c:\programs\kitteh.exe
    You guys are making the green giant seem like a green dwarf.
  4. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.68/day)
    Thanks Received:
    184
    Well, we don't know the transistor count of RV770 with certainty, but it's above 800 million, so a dual core would be more than 1600 million. That's more transistors than the GT200 has, but I don't think it would be a big problem.

    On the other hand, the problem with GT200 is not transistor count but die size - the fact that they have done it in 65 nm. In 55 nm the chip would probably be around 400 cm2, which is not that high really.
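
    For the curious, a quick sketch of the ideal scaling, assuming die area shrinks with the square of the feature size (real shrinks never scale this perfectly, since pads and analog parts don't shrink):

    Code:
    # Ideal 65 nm -> 55 nm area scaling for a GT200-sized die.
    # Treat the result as an optimistic lower bound.
    gt200_area_65nm = 576.0            # mm^2, reported GT200 die size
    scale = (55.0 / 65.0) ** 2         # ideal area scaling factor (~0.72)
    gt200_area_55nm = gt200_area_65nm * scale

    print(f"ideal GT200 area at 55 nm: {gt200_area_55nm:.0f} mm^2")  # ~412 mm^2, i.e. about 4 cm^2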

    Another problem, when we compare GT200's size against the performance it delivers, is that they have added those 16 KB caches to the shader processors, which aren't used by any released game or benchmark - applications will need to be programmed to use them. As it stands, GT200 has almost 0.5 MB of cache with zero benefit. The 4 MB of cache in a Core 2 is pretty much half the die; in GT200 it's a lot less than that, but still a lot from a die size/gaming performance point of view. And to that you have to add the L1 caches, which are probably double the size of G92's, again with zero benefit. It's here and in the FP64 shaders that Nvidia has spent a lot of silicon future-proofing the architecture, but we don't see the fruits yet.

    I think that with GPUs, a bigger single-core chip is the key to performance, and multi-GPU is the key to profitability once a certain point in the fab process is reached. The best result is probably something in the middle: don't go beyond two GPUs, and keep making the chips bigger as the fab process allows. As I explained above, I don't think multi-core GPUs have any advantage over bigger chips.

    That would open the door to both bigger chips and, as you say, multi-core chips. Again, I don't see any advantage in multi-core GPUs.


    And what's the difference between that and what they do today? Well, it's what Nvidia does today - Ati isn't doing it with RV670 and RV770, though they did in the past.
  5. candle_86 New Member

    Joined:
    Dec 28, 2006
    Messages:
    3,916 (1.40/day)
    Thanks Received:
    233
    How do you get that it has 40 ROPs? The G92 has 16 - that alone discredits the idea of a dual G92 under there.
  6. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.40/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    Even the CEO of Nvidia admitted that die shrinking will do shit all in terms of cooling; the effectiveness of a die shrink from 65nm to 45nm is not that big for that many transistors.

    AMD creating this "Nvidia is a dinosaur" hype is viable.

    If you have that much heat output on one single core, the cooling gets expensive to manufacture. With 200W on one core, the cooling system has to transfer the heat away ASAP, while 2x100W cores would fare better, with the heat output being spread out.
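
    For some very rough numbers (the wattages and die areas here are assumptions for illustration, not measured figures), the heat flux each die presents to its cooler looks like this:

    Code:
    # Compare one big hot die against two smaller dies at half the power each.
    # All figures below are assumed purely for illustration.
    def heat_flux(power_w, area_mm2):
        """Power density at the die surface in W/mm^2."""
        return power_w / area_mm2

    single_big = heat_flux(200.0, 576.0)   # one ~200 W monolithic die
    each_small = heat_flux(100.0, 260.0)   # each of two ~100 W smaller dies

    print(f"single big die: {single_big:.2f} W/mm^2 in one hotspot")   # ~0.35
    print(f"each small die: {each_small:.2f} W/mm^2, two hotspots")    # ~0.38

    Under these assumed numbers the flux per die is actually in the same ballpark; the practical gain from splitting is that the heat lands in two places on the PCB, each with its own cooler, rather than all in one spot.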

    Realise that a larger core means a far more delicate card, with the chip itself requiring more BGA solder balls; that means the card can't take much stress before the BGA solder balls fail.

    AMD is saying that if they keep doing what they're doing now, they will not need to completely redesign an architecture. It doesn't matter if they barely spend anything on R&D; in the end the consumer benefits from lower prices, and we are the consumer, remember.

    AMD can decide to stack two or even three cores, provided they make the whole card function as one GPU (instead of the HD3870X2's approach of two cards at the software/hardware level), if the performance and price are good.

    Just correcting you: two on one die is basically what we have ATM anyway - GPUs are effectively a collection of processors on one die. AMD is trying not to merge dies because they know that die shrinks from 65 to 45nm do not really help in terms of heat output, so they are splitting the heat output instead. As I mentioned before, a larger die means more R&D effort and is more expensive to manufacture.
    Last edited: Jun 18, 2008
  7. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,436 (11.28/day)
    Thanks Received:
    13,620
    Location:
    Hyderabad, India
    At least NVidia came this far. ATI hit its limit way back with the R580+; the X1950 XTX was the last 'mega chip' ATI made. Of course the R600 was their next megachip, but it ended up being a cheeseburger.
  8. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.40/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    Instead of the word cheeseburger I think you should use something that tastes vile. Cheeseburgers are successful.
  9. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,436 (11.28/day)
    Thanks Received:
    13,620
    Location:
    Hyderabad, India
    By 'cheeseburger' I was highlighting 'fattening', 'not as nutritious as it should be', 'unhealthy diet'. Popularity isn't indicative of a better product; ATI fans will continue to buy just about anything they put up. Though I'm now beginning to admire the HD3870 X2.
  10. Nyte New Member

    Joined:
    Jan 11, 2005
    Messages:
    185 (0.05/day)
    Thanks Received:
    34
    Location:
    Toronto ON
    One still has to wonder though if NVIDIA has already thought ahead and designed a next-gen GPU with a next-gen architecture... just waiting for the right moment to unleash it.
  11. laszlo

    laszlo

    Joined:
    Jan 11, 2005
    Messages:
    891 (0.25/day)
    Thanks Received:
    105
    Location:
    66 feet from the ground


    The die size of gt200 is 576mm2 on 65nm so in 55nm 160000 mm2 ? :slap:
  12. aj28 New Member

    Joined:
    Jun 18, 2008
    Messages:
    352 (0.16/day)
    Thanks Received:
    35
    Not saying anything but umm... From my understanding anyway, die shrinks generally cause worse yields and a whole mess of manufacturing issues in the short run, depending of course upon the core being shrunk. Again, not an engineer or anything, but shrinking the GT200, being the behemoth that it is, will not likely be an easy task. Hell, if it were easy we'd have 45nm Phenoms by now, and Intel wouldn't bother with their 65nm line either now that they've already got the tech pretty well down. Correct me if I'm wrong...
  13. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.68/day)
    Thanks Received:
    184
    :slap::slap::slap::slap::slap::slap:
    Yeah I meant 400 mm2 :roll:
    :slap::slap::slap::slap::slap::slap:
  14. Voyager

    Joined:
    Jun 18, 2008
    Messages:
    23 (0.01/day)
    Thanks Received:
    2
  15. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,436 (11.28/day)
    Thanks Received:
    13,620
    Location:
    Hyderabad, India
  16. candle_86 New Member

    Joined:
    Dec 28, 2006
    Messages:
    3,916 (1.40/day)
    Thanks Received:
    233
    Yeah, but raw power means diddly - the R600 had twice the computational units yet lagged behind. I still await benchmarks.
  17. Easy Rhino

    Easy Rhino Linux Advocate

    Joined:
    Nov 13, 2006
    Messages:
    13,412 (4.71/day)
    Thanks Received:
    3,234
    I love AMD, but come on. Why would they go and say something like that? Nvidia has proven time and again that they can put out awesome cards and make a ton of money doing it. Meanwhile, AMD's stock is in the toilet and they aren't doing anything special to keep up with Nvidia. Given the past two years of history between the two companies, who would you put your money on in this situation? The answer is Nvidia.
  18. newconroer

    newconroer

    Joined:
    Jun 20, 2007
    Messages:
    3,029 (1.15/day)
    Thanks Received:
    301
    Even if the statement is true it still falls in Nvidia's favor either way.

    They have the resources to go 'smaller' if need be. ATi has less flexibility.
  19. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.40/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    LOL. That would make Zek cry :laugh:
  20. DanishDevil

    DanishDevil

    Joined:
    Oct 6, 2005
    Messages:
    10,203 (3.14/day)
    Thanks Received:
    2,090
    Location:
    Newport Beach, CA
    I just woke up my entire family because I fell out of my chair and knocked over my lamp at 3AM when I read that :laugh:
  21. marsey99

    marsey99

    Joined:
    Jul 18, 2007
    Messages:
    1,562 (0.60/day)
    Thanks Received:
    293
    ATI claims Nvidia is using dinosaur tech - love it.

    It's the most powerful single GPU ever; of course ATI will try to dull the shine on it.

    I recall all the ATI fanbois crying foul when NV did the 7950 GX2, but now it's cool to put two GPUs on one card to compete?

    Wait till the GTX 280 gets a die shrink and they slap two on one card - can you say 4870X4 needed to compete?
  22. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,436 (11.28/day)
    Thanks Received:
    13,620
    Location:
    Hyderabad, India
    Even if you do shrink the G200 to 55nm (and get a roughly 4 sq.cm die), its power and thermal properties won't allow an X2. Its peak power consumption is too high compared to the G92 (128 SP, 600 MHz), which did allow it. Look at how the GTX 280 uses a 6 + 8 pin input - how far do you think a die shrink would go to reduce that? Not to forget, there's something funny about why NV isn't adopting the newer memory standards (which are touted to be energy efficient). (First guess: stick with GDDR3 to cut manufacturing costs, since it takes $120 to make the GPU alone.) Ceiling Cat knows what... but I don't understand what "meow" actually means... it means a lot of things :(
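
    As a very rough answer to my own question: dynamic power scales roughly with capacitance x voltage^2 x frequency, so a quick sketch (the capacitance scaling and voltages below are assumptions for illustration; leakage is ignored entirely):

    Code:
    # Crude dynamic-power estimate for a 65 nm -> 55 nm shrink at equal clocks.
    # Every number here is an assumption for illustration, not process data.
    cap_scale = 55.0 / 65.0      # assume switched capacitance scales with feature size
    v_old, v_new = 1.18, 1.08    # assumed core voltages before/after the shrink
    freq_scale = 1.0             # same clocks

    power_scale = cap_scale * (v_new / v_old) ** 2 * freq_scale
    print(f"dynamic power after shrink: {power_scale:.0%} of the original")  # ~71%

    A cut of that order helps, but on its own it wouldn't bring a 200 W-plus board down into the envelope that made the G92-based GX2 possible.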
  23. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.40/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    In the end it DOES NOT MATTER how AMD achieves their performance.

    The 7950GX2 is an invalid comparison, as it could not function on every system: it was seen at the driver level as two cards, so an SLI board was needed. You can't compare the 4870X2 to a 7950GX2 - it's like comparing apples and oranges. The 4870X2 appears to the system as only ONE card, not two, and CF doesn't need to be enabled (so the usual multi-GPU performance problems go out the window). Moreover, the way the card uses memory is much the same as the C2Ds: two cores, shared L2.
  24. Megasty New Member

    Joined:
    Mar 18, 2008
    Messages:
    1,263 (0.54/day)
    Thanks Received:
    82
    Location:
    The Kingdom of Au
    The sheer size of the G200 won't allow for a GX2 or whatever. The heat that two of those things produce would burn each other out. Why in the hell would NV put two of them on a card when it costs an arm & a leg just to make one? The price/performance ratio for this card is BS too, when $400 worth of cards, whether it be the 9800GX2 or two 4850s, are not only in the same league as the beast but allegedly beat it. The G200b won't be any different either. NV may be putting all their cash into this giant chip ATM, but that doesn't mean they're going to do anything stupid with it.

    If the 4870X2 & the 4850X2 are both faster than the GTX280 & cost a whole lot less, then I don't see what the problem is, except for people crying about the 2-GPU mess. As long as it's fast & DOESN'T cost a bagillion bucks, I'm game.
  25. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.68/day)
    Thanks Received:
    184
    I would like to know what facts you guys are basing your claims on that a die shrink won't do anything to lower heat output and power consumption. It has always helped A LOT. It is helping Ati and it will surely help Nvidia. Thinking that the lower power consumption of RV670 and RV770 comes from architecture enhancements alone is naive. I'm talking about peak power, relative to where R600 stood against the competition; idle power WAS indeed improved, and so it has been on GT200.
