Friday, November 7th 2008

AMD to Give RV770 a Refresh, G200b Counterattack Planned

The RV770 graphics processor changed AMD's fortunes in the graphics processor industry and put it back in the race for supremacy with its larger rival NVIDIA. The introduction of RV770-based products had a huge impact on the mid-range and high-end graphics card markets, and took NVIDIA by surprise. Jen-Hsun Huang, the CEO of NVIDIA, has been quoted as saying that the company had underestimated its competitor's latest GPU, referring to RV770. While the Radeon HD 4870 graphics accelerator provided direct competition to the 192-shader GeForce GTX 260, the subsequent introduction of a 216-shader variant saw it lose ground, leaving a doubling of memory size to carve out a newer SKU, the Radeon HD 4870 1GB. Performance benchmarks of this card from across the media have been mixed, but they show that AMD isn't giving up on this chance to gain technological supremacy.

In Q4 2008, NVIDIA is expected to release two new graphics cards: the GeForce GTX 270 and GeForce GTX 290. The cards are based on NVIDIA's G200 refresh, the G200b, which incorporates a newer manufacturing process to facilitate higher clock speeds and step up performance. This threatens the market position of AMD's RV770, since it is already established that G200, when overclocked to its stable limits, achieves more performance than RV770 pushed to its limits. This leaves AMD with some worries: it cannot afford to lose the strong market position of its cash cow, the RV770, to an NVIDIA product that outperforms it by a significant margin in its price domain. The company's next-generation graphics processor will be the RV870, which still has some time left before it can be rushed in, since its introduction is tied to the constraints of foundry companies such as TSMC and their readiness with the required manufacturing process (40 nm silicon lithography). While TSMC takes its time working on that, there is a fair bit of time left for RV770 to face NVIDIA, which, given the circumstances, looks like a lost battle. Is AMD going to do something about its flagship GPU? Will AMD make an effort to maintain its competitiveness before the next round of the battle for technological supremacy begins? The answer is tilting in favour of yes.


AMD will be giving the RV770 a refresh with the introduction of a new graphics processor, which could come out before RV870. This graphics processor is codenamed RV790, while the possible new SKU name is kept under wraps for now. AMD will be retaining the exact manufacturing process of the RV770 and all its machinery, but will be making changes to certain parts of the GPU that genuinely allow it to run at higher clock speeds, unleashing the best efficiency of all its 10 ALU clusters.

Déjà vu? AMD has already attempted something similar with its big plans for the Super-RV770 GPU, where the objective was the same, to achieve higher clock speeds, but the approach wasn't right. All they did back then was put batches of RV770 through binning, pick the best-performing parts, and use them on premium SKUs with improved cooling. The attempt evidently wasn't very successful: no AMD partner was able to sell graphics cards that ran stable out of the box at the clock speeds they set out to achieve, in excess of 950 MHz.

This time around, the objective remains the same: to make the machinery of RV770 operate at very high clock speeds and bring out the best performance-efficiency of those 800 stream processors. The approach, however, will be different: to re-engineer parts of the GPU to facilitate higher clock speeds. This aims to boost the shader compute power (SCP) of the GPU and push its performance. What gains are slated to be brought about? Significant and sufficient. Significant, with reference clock speeds raised beyond what the current RV770 can reach with overclocking, and sufficient to make it competitive with G200b-based products.
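As a back-of-the-envelope check on what such a clock bump buys: peak shader compute power scales linearly with core clock. A minimal sketch follows; the 850 MHz refresh clock used here is purely an assumed figure for illustration, not a confirmed spec.

```python
# Back-of-the-envelope shader compute power (SCP) for RV770-class GPUs.
# Peak GFLOPS = stream processors x 2 FLOPs/clock (multiply-add) x clock in GHz.

def scp_gflops(stream_processors: int, clock_mhz: float) -> float:
    """Peak single-precision compute, counting a multiply-add as 2 FLOPs."""
    return stream_processors * 2 * clock_mhz / 1000.0

# Reference HD 4870 runs its 800 stream processors at 750 MHz.
rv770_hd4870 = scp_gflops(800, 750)          # 1200 GFLOPS (1.2 TFLOPS)
# Hypothetical refreshed clock, chosen only to show the linear scaling.
hypothetical_refresh = scp_gflops(800, 850)  # 1360 GFLOPS
```

Every extra 100 MHz on the core adds 160 GFLOPS of peak shader throughput at this ALU count, which is why re-engineering for clock speed is a cheaper route than a new chip.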

With this, AMD looks to keep its momentum as it puts up strong competition with NVIDIA, yielding great products from both camps at great prices, all in all propelling the fastest-growing segment in the PC hardware industry: graphics processors. This is going to be a merry Xmas [shopping season] for graphics card buyers.

92 Comments on AMD to Give RV770 a Refresh, G200b Counterattack Planned

#1
Frederik S
Staff
Very nice article, btarunr. Looking forward to seeing how they perform.
#2
W1zzard
hayder.master said:
thanx for this btarunr, really very interesting news, and the first time I've read about it. That's good from ATI, looks like a reaction from AMD to the new NVIDIA GTX 270 and GTX 290. I want to get an ATI card, so I think I'll wait for the new one; I hope it comes with 512-bit to make GDDR5 really useful
256-bit -> 512-bit does the same thing that GDDR3 -> GDDR5 does: double the memory bandwidth. Apparently RV770 does not need that much bandwidth, or you would see a much bigger difference between the 4850 and the 4870.
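As a quick sanity check on that arithmetic, here is a minimal sketch using the two cards' reference memory specs (993 MHz GDDR3 on the HD 4850, 900 MHz GDDR5 on the HD 4870, both on a 256-bit bus):

```python
# Memory bandwidth in GB/s = effective data rate (MT/s) x bus width (bits) / 8 / 1000.

def bandwidth_gbps(effective_mtps: float, bus_bits: int) -> float:
    """Peak memory bandwidth from effective transfer rate and bus width."""
    return effective_mtps * bus_bits / 8 / 1000

# HD 4850: 993 MHz GDDR3, double data rate -> 1986 MT/s effective.
hd4850 = bandwidth_gbps(1986, 256)  # ~63.6 GB/s
# HD 4870: 900 MHz GDDR5, 4 transfers per clock -> 3600 MT/s effective.
hd4870 = bandwidth_gbps(3600, 256)  # 115.2 GB/s
```

Despite identical 256-bit buses, the 4870 has nearly double the bandwidth, yet its gaming lead over the 4850 is far smaller than that, which is the point being made about RV770 not being bandwidth-starved.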
#3
wolf
Performance Enthusiast
Very cool article, btarunr... I can really see the potential in RV770 if they can clock the nuts off it :)
#4
FudFighter
W1zzard said:
256 bit -> 512 bit does the same that gddr3 -> gddr5 does. double the memory bandwidth. apparently rv770 does not need that much bandwidth or you would see a much bigger difference between 4850 and 4870
I would have pointed that out, but from his post I get the impression that he won't listen to/understand that.

It's like trying to explain that moving to DDR3 for normal users is a dumb move: it costs more and offers less performance (for the cheaper stuff).

Most people are better off getting more RAM instead of "faster" RAM.
#5
eidairaman1
The Exiled Airman
FudFighter, more RAM only helps with monitors at resolutions larger than 1280x1024; otherwise, if you're playing at that resolution, 512 MB of RAM is enough, or even overkill.
#6
FudFighter
You misunderstand, I meant system RAM. A lot of people think 1 GB of DDR3 (system RAM) is better than 2 or even 4 GB of DDR2 (system RAM) and will argue to the end about it...
#7
eidairaman1
The Exiled Airman
Eventually, bandwidth does overtake the latency drawback.
#8
FudFighter
But in the case of cheap DDR3 and 1 GB (especially with Vista), you're still better off with cheap, quality DDR2 currently. IF you spend the money, DDR3 for Intel is a good move, but the cheap stuff at, say, 1333 @ 9-9-9-xx is NOT going to give decent performance for the average user.

Try it yourself on Vista: take 1 GB of cheap RAM, use Vista for a while, then slap in a decent 2 or 4 GB kit and watch the difference... the performance boost is DRASTIC, even for desktop apps.

Most Joe Sixpack-type users would be better off with 2 GB of cheap yet quality DDR2 than 1 GB of cheap-ass DDR3, or 4 GB of quality DDR2 vs 2 GB of cheap-ass DDR3.

Just a fact of how much memory apps and Vista itself use up these days :)
#9
eidairaman1
The Exiled Airman
Well, obviously capacity has a part to play in Vista, and that's because Vista is more resource-demanding than XP was; the minimum spec for Vista is something like 768 MB-1 GB of RAM, where the recommended is 2-4 GB. In general, NT is more resource-demanding than the other codebases MS has used for Windows.

Now for another example, 1 GB DDR2 vs 2 GB DDR3: I say most will go with capacity over speed due to Vista's memory demands. Beyond that, when you want to move up from one RAM generation to another you have to switch out motherboards (overall cheaper than having to swap CPUs). But above all, let's get back on track with the video cards themselves, not system RAM.
#10
Disruptor4
FudFighter, don't even bother. They don't seem to understand what you're getting at lol.
#11
FudFighter
Yeah, that's what I was getting at, Disruptor4 :)
#12
btarunr
Editor & Senior Moderator
FudFighter said:
But in the case of cheap DDR3 and 1 GB (especially with Vista), you're still better off with cheap, quality DDR2 currently. IF you spend the money, DDR3 for Intel is a good move, but the cheap stuff at, say, 1333 @ 9-9-9-xx is NOT going to give decent performance for the average user.
A few things you need to know:
  • GDDR3 ≠ DDR3

  • Sure, DDR3 gives you higher frequencies, at latencies that look bad from a DDR2/DDR1 perspective (e.g. 1333 @ say 9-9-9-21), but because the frequency is higher, each clock cycle is shorter (there are more cycles per unit time), so latencies don't become as much of a problem there.
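The second point can be made concrete with a quick calculation. A minimal sketch, comparing two common retail speed grades (DDR2-800 CL5 vs DDR3-1333 CL9, picked only for illustration):

```python
# Absolute CAS latency in nanoseconds = CL cycles / I/O clock frequency.
# DDR3's timings look worse in cycles, but each cycle is shorter.

def cas_latency_ns(cl: int, data_rate_mtps: float) -> float:
    """Convert a CAS latency in clock cycles to nanoseconds."""
    io_clock_mhz = data_rate_mtps / 2  # double data rate: I/O clock is half the transfer rate
    return cl / io_clock_mhz * 1000    # cycles / MHz -> nanoseconds

ddr2 = cas_latency_ns(5, 800)   # 12.5 ns
ddr3 = cas_latency_ns(9, 1333)  # ~13.5 ns
```

So "9-9-9" DDR3 is only about a nanosecond slower to first access than "5-5-5" DDR2, while delivering far more bandwidth.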
#13
eidairaman1
The Exiled Airman
I say it's best to skip RAM generations, say go from DDR to DDR3/4, or even from DDR to FB-DIMM.
#14
FudFighter
Yeah, I know, btarunr; it was not about GDDR or DDR2 or whatever, it was more about people not understanding that just because something has a higher number doesn't make it better.

I should have used video cards as an example, I guess, since people don't get that I was talking about why it's pointless to try and explain this stuff to some people.

Re-explanation:

Some people think a 9600GT is better than an 8800GT because 9600 is higher than 8800, when in reality the 8800GT is hands down the better card.

Does that make it clearer what I meant?

GDDR5 runs at FAR higher clocks than GDDR3, so the clocks outbalance the bus bit-width. ATI can make the PCB cheaper and less complex (fewer failed cards), whereas NVIDIA's PCBs cost a lot more to make, driving up cost; and due to the extra complexity they have more PCBs that fail to meet spec or have flaws that end up causing problems down the line (like a card that fails after a few months due to a bad trace burning out).

Sometimes cheaper is better!!!!
#15
Wile E
Power User
FudFighter said:
Dunno, m8, I have seen some reviews that showed the performance of the 3800 cards being quite notably better using apps that can use Avivo, such as PowerDVD, WinDVD and the like, where Avivo can take load off the CPU by running the video processing almost fully on the GPU.

I am still waiting for some mainstream apps/codecs to use NVIDIA and ATI GPUs.
You missed my point. The UVD is what handles video decode on these cards. 2900 didn't have it. That is what gave the 3800 cards their improvement. Thus, it wasn't so much of an Avivo improvement, as it was them actually including the UVD this time. (In other words, it was a shot at ATI. ;) )
#16
btarunr
Editor & Senior Moderator
FudFighter said:
Some people think a 9600GT is better than an 8800GT because 9600 is higher than 8800, when in reality the 8800GT is hands down the better card.

Does that make it clearer what I meant?

GDDR5 runs at FAR higher clocks than GDDR3, so the clocks outbalance the bus bit-width. ATI can make the PCB cheaper and less complex (fewer failed cards)
Higher freq? No... GDDR5 doesn't run at higher frequencies, but it pushes ~2x the data per unit time, and people choose to equate that to a higher frequency. The memory on an HD 4870 runs at 900 MHz (actual), or effectively 3600 MHz, whereas for GDDR3 to get there on the same bus width it would take 1800 MHz (actual, something impossible), or 900 MHz (actual) on 2x the bus width.
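That equivalence works out numerically; a minimal sketch of the three paper-equivalent configurations:

```python
# Three ways to reach the HD 4870's 115.2 GB/s on paper:
# GDDR5 at 900 MHz actual (4 transfers/clock) on 256-bit,
# GDDR3 at a practically impossible 1800 MHz actual (2 transfers/clock) on 256-bit,
# or GDDR3 at 900 MHz actual on a 512-bit bus.

def gbps(clock_mhz: float, transfers_per_clock: int, bus_bits: int) -> float:
    """Peak bandwidth in GB/s from actual clock, transfer multiplier, bus width."""
    return clock_mhz * transfers_per_clock * bus_bits / 8 / 1000

gddr5_256 = gbps(900, 4, 256)   # 115.2 GB/s
gddr3_256 = gbps(1800, 2, 256)  # 115.2 GB/s
gddr3_512 = gbps(900, 2, 512)   # 115.2 GB/s
```

All three land on the same bandwidth; the difference is that only the first is achievable at sane clocks without the PCB cost of a 512-bit bus.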

FudFighter said:
where nvidia pcb's cost alot more to make driving up cost and due to extra complexity they have more pcb's that failed to meet spec or have flaws that endup causing problems down the line(like a card that fails after a few months due to a bad trace burning out)
not sure where you got that from :confused:
#17
erocker
Senior Moderator
btarunr said:
not sure where you got that from :confused:
I thought I heard about NVIDIA's PCBs costing more right before the launch of the GTX 2xx series. A larger memory bus on the PCB costs more.
#18
Wile E
Power User
erocker said:
I thought I heard about NVIDIA's PCBs costing more right before the launch of the GTX 2xx series. A larger memory bus on the PCB costs more.
It does. That's part of the reason the 3870 was so much cheaper than the 2900, the other part was the die shrink.
#19
btarunr
Editor & Senior Moderator
Right, and about the "burnout" part?
#20
Wile E
Power User
btarunr said:
Right, and about the "burnout" part?
Who knows? lol. While I can't say they'd be more prone to having a trace burn out, the odds of a bad pcb are higher.
#21
btarunr
Editor & Senior Moderator
Wile E said:
Who knows? lol. While I can't say they'd be more prone to having a trace burn out, the odds of a bad pcb are higher.
Ah probability..the odds favour a horse over a giraffe to fly :toast:
#23
FudFighter
btarunr said:
Ah probability..the odds favour a horse over a giraffe to fly :toast:
The more complex the PCB, the more prone to flaws, just as the more complex the core/CPU, the more prone to flaws. Trace burnouts have happened due to flawed/damaged internal traces: say the person laying the traces (or the machine) twists or slightly tears the one being laid. A flawed/damaged trace can overheat and burn out. I have seen this in complex PCBs before; it's far more common than you may think. A lot of cards that die under stress could easily be dying from PCB errors, not just flawed/bad caps/chips.

Say a normal trace is ============== thick, and you end up with a trace that's like this: =======--======. Wouldn't that small, overly thin area be more likely to burn out than a trace that's laid properly?

Now this can happen in any PCB, but the more complex something is, the more chance something's going to be screwed up.

The old adage "the simple plan plays the best" is true more times than not.
#24
DarkMatter
Those kinds of failures are not that common, and failure-rate numbers are always tricky. Imagine you have two models, A and B. A is much simpler than B, and because of that B has a failure rate 5x bigger than A's. Disaster, right? Not necessarily; we lack a lot of info. It often happens that the failure rate for A is smaller than 1%, so even with a much higher failure rate, B still ships more than 95% of its products successfully. This scenario is the most common one, and from an engineering point of view that 5% of failures is certainly a lot (they are obviously not doing well, and any engineer would say that), but it doesn't mean the product is going to be affected much, price-wise and so on.
#25
FudFighter
Dunno, over the years I have seen a good number of bad traces, and as things get smaller and more complex I wouldn't expect that to disappear.

We used to fix flawed cards with burnt/damaged/flawed surface traces using a conductive pen, then seal it with some clear fingernail polish :)