ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.

Discussion in 'News' started by Polaris573, Jun 17, 2008.

  1. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.43/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    I agree with your view.

    With the GT200s, Nvidia are shaving down their profits just to get these things to sell. AMD, on the other hand, don't have to reinforce their cards or put high-end air cooling on them, so they're way better off. If the 4850s, and the RV770 line in general, sell well, the GT200s will look like an awful flop.
  2. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,240 (11.36/day)
    Thanks Received:
    13,584
    Location:
    Hyderabad, India
    A 65nm single GPU requires 6 + 8 pin power input (obviously for higher power draw at peak). How much of that draw can a die shrink to 55nm cut? Enough to make a GX2? Without, say, three 6-pin connectors?
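    (A quick aside on the power envelope this question turns on: the PCIe spec allows 75 W from the slot, 75 W per 6-pin and 150 W per 8-pin connector. A minimal Python sketch of the arithmetic; the connector combinations below are illustrative, not actual card specs.)

        # PCIe board power budget from spec limits: slot 75 W, 6-pin 75 W, 8-pin 150 W.
        SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

        def board_budget(six_pins, eight_pins):
            """Maximum board power (watts) from the slot plus aux connectors."""
            return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

        print(board_budget(1, 1))  # 6+8 pin layout -> 300 W (GTX 280's envelope)
        print(board_budget(3, 0))  # three 6-pin connectors -> also 300 W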
  3. Megasty New Member

    Joined:
    Mar 18, 2008
    Messages:
    1,263 (0.54/day)
    Thanks Received:
    82
    Location:
    The Kingdom of Au
    Of course it'll lower the heat & power. The only point was that it still wouldn't allow for a GX2, not to mention that the card would cost around $1200 :wtf: :shadedshu

    However, a faster & cheaper 400mm² die does have EVERY advantage over a slower, more costly 576mm² die.
  4. tigger

    tigger I'm the only one

    Joined:
    Mar 20, 2006
    Messages:
    10,183 (3.34/day)
    Thanks Received:
    1,399
    I bet there ain't gonna be a GTX 200 mobile chip :ohwell:
  5. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.43/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    80nm → 55nm (R600 to RV670) was a HUGE leap in process size, and remember that RV670 also had a massive chunk of R600 cut out: the 512-bit memory controller was effectively halved.

    Die shrinks do something, but below around 65nm a shrink on its own isn't really significant. Nvidia's CEO admitted that die-shrinking the GTX 280 wouldn't help its extreme heat output much. That's fairly reasonable: transistor count is more of a factor. In both cases, G80 → G92 and R600 → RV670, the gains came from cutting down the memory controller.

    By the way, the reason why AMD's cards use more power is simple: their cards use more VRM phases than Nvidia's. More phases means a little more overhead power, but each phase is subject to less current and generates less heat.
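    (Side note on the phase argument: in a multi-phase VRM the load current is split across the phases, so conduction loss per phase drops with the square of the per-phase current, while switching overhead grows with phase count. A toy Python model; every number in it is invented purely to show the scaling, not taken from any real card.)

        # Toy multi-phase VRM loss model. Conduction loss: N * (I/N)^2 * R = I^2*R/N;
        # switching loss grows roughly linearly with phase count. All values invented.
        def vrm_losses(load_current_a, phases, r_phase_ohm=0.004, w_switch_per_phase=0.5):
            per_phase = load_current_a / phases
            conduction = phases * per_phase**2 * r_phase_ohm
            switching = phases * w_switch_per_phase
            return conduction + switching

        for n in (2, 4, 8):
            print(n, "phases ->", round(vrm_losses(100, n), 1), "W lost")
        # 2 -> 21.0 W, 4 -> 12.0 W, 8 -> 9.0 W: more phases spread the current
        # and run cooler, at the cost of a little extra switching overhead.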
  6. newconroer

    newconroer

    Joined:
    Jun 20, 2007
    Messages:
    2,889 (1.11/day)
    Thanks Received:
    272

    Ya because ATi's been looooooving the way things have turned out the last two and a half years.

    Yep, they don't have to put 'high end air cooling' on their products, what a wonderful relief for them!

    ~
  7. Megasty New Member

    Joined:
    Mar 18, 2008
    Messages:
    1,263 (0.54/day)
    Thanks Received:
    82
    Location:
    The Kingdom of Au
    Not to be negative or anything, but none of the stock cards from NV or ATi have high-end cooling fans. The stock casing only restricts most of the fans anyway :(
  8. Kreij

    Kreij Senior Monkey Moderator Staff Member

    Joined:
    Feb 6, 2007
    Messages:
    13,881 (5.08/day)
    Thanks Received:
    5,615
    Location:
    Cheeseland (Wisconsin, USA)

    Why are the wafers limited to 300mm? Can't they use a 600mm wafer and get four times the processors out of it?
    Is it just because all the fabs are set up to use that size, or is there some kind of physical limit?
  9. [I.R.A]_FBi

    [I.R.A]_FBi New Member

    Joined:
    May 19, 2007
    Messages:
    7,664 (2.92/day)
    Thanks Received:
    540
    Location:
    c:\programs\kitteh.exe

    orly?
    I'm sure I read somewhere about the GTX 280 cooler being designed by CM or something.
  10. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.69/day)
    Thanks Received:
    184
    IMO a die shrink to 55nm could open up the possibility of a GX2. Maybe not a GTX 280 GX2, but one with slightly lower clocks or one cluster disabled, with enough performance to beat the X2. Of course it would require more power, but it would still be within the 6+8 pin envelope. I have three "facts", though of course they're only based on my opinion:

    1- You have to take into account how power consumption works. It scales worse than linearly, since higher clocks need higher voltages and power goes with voltage squared, so a slower part consumes a lot less. Because GT200 turned out worse than expected in this area, Nvidia had to lower the clocks, but they've probably kept them as high as possible within the selected power envelope. There's always a sweet spot for performance-per-watt on any chip, and the GTX 280 probably sits quite a bit above it. FACT: look at Wizzard's Zotac AMP! GTX 280 review; it consumes a lot more than you'd expect from that overclock. Aim a bit below that sweet spot and you have a "low power" chip. For example, a GTX 280 GX2 @ 500 MHz would consume a lot less and still leave the HD 4870 X2 behind in performance.

    2- Nvidia has implemented the ability to shut down parts of the chip in GT200, and it really works very well. Again, look at Wizzard's power consumption charts: it consumes a lot less than the X2 on average, even though its maximum is almost the same. A GX2 would therefore probably never reach its maximum theoretical power consumption; there's no way you're going to get a total of 64 ROPs working at the same time, for example.

    3- Continuing the above argument, IMO if Nvidia did a GX2 it wouldn't be based on the GTX 280, but on the 8800 GS's successor. Nvidia will surely make a 16/20/24 ROP card while maintaining a high shader count (maybe 192/168 SP, the same or one fewer cluster than the GTX 260, for example); they'd be stupid not to, as it makes more sense than ever. The GS is "weak" because it has 12 ROPs, but 16, on the other hand, are enough for high-def gaming. 16 ROPs x 2 is more than enough, as the X2/GX2 can testify; 32 x 2 is just over-over-overkill and silly.
    My bet is that Nvidia will do a 20 ROP, 168/192 SP card for the upper mainstream no matter what, and they could use that for a GX2. Final specs for that hypothetical GX2 would be: 40 ROPs, 336/384 SP, 112/128 TMUs and 2 x 320-bit memory controllers, assuming they can't make the card use a single shared memory pool as R700 seems set to do. That card would leave the X2 well behind performance-wise and still be within the power envelope IMO. Of course that envelope would be higher than the X2's, but reachable IMHO, and still within the 6+8 pin layout's 300W; the GTX 280 needs 6+8 pins by just a hair.
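    (On point 1: the usual first-order model is dynamic power ∝ C·V²·f, and since a lower clock usually permits a lower voltage, a modest downclock buys a disproportionate power saving. A Python sketch of the 500 MHz GX2 scenario above; the GTX 280's 236 W TDP and 602 MHz core clock are real figures, but the 15% voltage drop is purely an assumption.)

        # First-order CMOS dynamic power: P ~ C * V^2 * f.
        def scaled_power(p_base_w, f_ratio, v_ratio):
            """Power after scaling clock by f_ratio and core voltage by v_ratio."""
            return p_base_w * f_ratio * v_ratio**2

        base = 236.0                                   # GTX 280 rated TDP (W)
        one_gpu = scaled_power(base, 500 / 602, 0.85)  # 500 MHz, assumed -15% voltage
        print(round(one_gpu), "W per GPU ->", round(2 * one_gpu), "W for a GX2")
        # ~142 W per GPU, ~283 W for two: inside a 300 W 6+8 pin budget.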
  11. wolf

    wolf Performance Enthusiast

    Joined:
    May 7, 2007
    Messages:
    5,541 (2.10/day)
    Thanks Received:
    842
    interesting
  12. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.69/day)
    Thanks Received:
    184
    Exactly. Wafer size is to fabs and chip makers what CPU sockets are to motherboard and CPU makers (in terms of compatibility), or the ATX standard if you prefer. 450mm wafers are on track already BTW; 600mm is too much to handle right now.

    EDIT: And yes, there is some physical limit too. Bear in mind that wafers are made by slicing silicon ingots very thinly (less than 1mm IIRC) and have to maintain the same thickness across their whole area. On top of that, the silicon crystal has to be homogeneous throughout the whole wafer too.
    Last edited: Jun 18, 2008
  13. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,240 (11.36/day)
    Thanks Received:
    13,584
    Location:
    Hyderabad, India
    That's because of yields. The bigger the wafer, the more you lose when one fails. Keeping wafer sizes limited is a precautionary measure (at the cost of some manufacturing expenditure).
  14. Nyte New Member

    Joined:
    Jan 11, 2005
    Messages:
    185 (0.05/day)
    Thanks Received:
    34
    Location:
    Toronto ON
    RV670/RV620/RV635 = 55 nm
    RV630/RV610 = 65 nm
    tkpenalty says thanks.
  15. tigger

    tigger I'm the only one

    Joined:
    Mar 20, 2006
    Messages:
    10,183 (3.34/day)
    Thanks Received:
    1,399
    What's the average yield for a 300mm wafer then? Does it differ between manufacturers, or is it totally dependent on the size of the wafer?
  16. HTC

    HTC

    Joined:
    Apr 1, 2008
    Messages:
    2,237 (0.97/day)
    Thanks Received:
    302
    It depends on the die size: the bigger the die, the fewer units a wafer yields.

    That's why ATI is ahead of nVidia (in this respect, atm): they manage to make their dies much smaller than nVidia's.
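    (To put numbers on that: the standard first-order gross-dies-per-wafer estimate is π·d²/(4A) − π·d/√(2A), where d is the wafer diameter and A the die area. A Python sketch using die sizes quoted earlier in this thread.)

        import math

        # Gross dies per circular wafer: wafer area over die area, minus edge loss.
        def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
            d, a = wafer_diameter_mm, die_area_mm2
            return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

        print(dies_per_wafer(576))  # ~GT200-sized die: about 94 candidates per wafer
        print(dies_per_wafer(400))  # ~400 mm^2 die: about 143 candidates per wafer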
  17. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,240 (11.36/day)
    Thanks Received:
    13,584
    Location:
    Hyderabad, India
    Of course, articles from The Inquirer are usually full of it, but one such article mentioned that on a 300mm wafer, GT200 yields could be as low as 40%. Somewhere else it was said that the die costs $110 to manufacture, and assembling it into the package (package as in electronics, not logistics) sends the cost up to $120. By increasing wafer sizes, you increase the risk of yield loss.
  18. DarkMatter New Member

    Joined:
    Oct 5, 2007
    Messages:
    1,714 (0.69/day)
    Thanks Received:
    184
    Yeah, that's what I wanted to say when I said "physical limit": there is no absolute physical limit there.
    That's also why I said it works like a standard, because bigger wafers are possible. Nvidia would surely want bigger wafers for GT200 at the expense of wafer yields, since the loss in wafer yields would probably be smaller than the gains in die yields, but because it's effectively a standard, they can't. I don't know if I've explained that well.

    EDIT: Also, I highly doubt those Inquirer yield numbers. They're probably in the high 40s, the Inquirer was told so, and they just slapped "40%" on it. That number also seems extremely low without knowing how high other GPU yields are; they're probably never above 75%, and much lower on new high-end chips such as RV770. The difference between, say, 60% and 50% is already very big.
    Last edited: Jun 18, 2008
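    (The area-vs-yield relationship being argued here is often approximated with a Poisson defect model, yield = e^(−A·D0). A Python sketch; the 0.2 defects/cm² defect density is an assumption for illustration only, not a published fab figure.)

        import math

        # Poisson yield model: fraction of defect-free dies = exp(-area * defect density).
        D0 = 0.2  # assumed defects per cm^2, purely illustrative

        def die_yield(die_area_mm2, d0=D0):
            return math.exp(-(die_area_mm2 / 100.0) * d0)

        for area in (576, 400, 256):
            print(f"{area} mm^2 die -> {die_yield(area):.0%} yield")
        # 576 -> ~32%, 400 -> ~45%, 256 -> ~60%: the yield penalty for a big
        # monolithic die grows exponentially with area.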
  19. spud107

    spud107

    Joined:
    Feb 12, 2007
    Messages:
    1,194 (0.44/day)
    Thanks Received:
    131
    Location:
    scotland
    so when nvidia see this they say "o rly?"
    next gfx card will be 2 PCBs, one for the GPU, the other for the rest of the components :D
    DanishDevil says thanks.
  20. DanishDevil

    DanishDevil

    Joined:
    Oct 6, 2005
    Messages:
    10,203 (3.17/day)
    Thanks Received:
    2,090
    Location:
    Newport Beach, CA
    ^ THAT was a good laugh. It wouldn't surprise me...
  21. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.43/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    Explain the logic behind that. You'd only drive manufacturing prices up by doing that; ever heard of multi-layered PCBs, man?...

    HD4850 > 9800GTX by 25% according to AMD; this is fairly believable.

    A dual GTX 280 between two slots is technically impossible. Why? Because 65nm to 55nm doesn't buy much of a change in TDP! Nvidia's CEO even admitted it; do I have to repeat this? A GX2 would be viable with, say, a GT200 variant similar in die size to the G92. It was mentioned that a die shrink would only drop the GTX 280's heat output to around 200W, which is still ridiculously high (400W+ for a GX2). Who cares about idle power when the card is ridiculously hot at load?

    Nvidia really shot themselves in the foot. Powerful as the GTX 280 is, the HD 4870 X2 will be the more successful product.
  22. spud107

    spud107

    Joined:
    Feb 12, 2007
    Messages:
    1,194 (0.44/day)
    Thanks Received:
    131
    Location:
    scotland
    the logic is "taking the piss", where's your sense of humour?
  23. tkpenalty New Member

    Joined:
    Sep 26, 2006
    Messages:
    6,958 (2.43/day)
    Thanks Received:
    345
    Location:
    Australia, Sydney
    Oh, um it broke when my cousin dropped my guitar.

    This is serious.
  24. Megasty New Member

    Joined:
    Mar 18, 2008
    Messages:
    1,263 (0.54/day)
    Thanks Received:
    82
    Location:
    The Kingdom of Au
    The only thing that's serious about it is how NV bet the farm on this thing. I'll be collecting that farm when I buy my 4870x2 :p
  25. EastCoasthandle

    EastCoasthandle New Member

    Joined:
    Apr 21, 2005
    Messages:
    6,889 (2.04/day)
    Thanks Received:
    1,505
    [images: GTX 200 series GPU wafer shots (from what I've found so far)]

    A wafer of the 4800 series GPU will yield a whole lot more. However, I haven't found one yet. Anyone have a 55nm wafer pic?
    Last edited: Jun 18, 2008
