
AMD to Give RV770 a Refresh, G200b Counterattack Planned

Not so much an Avivo improvement as the inclusion of an actual UVD, which the 2900 doesn't have.

Also, don't forget about the 2900's poor AA performance.

Dunno, m8. I've seen reviews showing 3800-series cards performing notably better in apps that can use Avivo, such as PowerDVD, WinDVD and the like, where Avivo takes load off the CPU by running the video processing almost entirely on the GPU.

I'm still waiting for some mainstream apps/codecs to use NVIDIA and ATI GPUs.
 
Thanks for this, btarunr. Really interesting news, and the first time I've read about it; good to see this kind of reaction from AMD to the new NVIDIA GTX 270 and GTX 290. I want to get an ATI card, so I think I'll wait for the new one. I hope it comes with a 512-bit bus to make GDDR5 really useful.

Going from 256-bit to 512-bit does the same thing that going from GDDR3 to GDDR5 does: it doubles the memory bandwidth. Apparently RV770 doesn't need that much bandwidth, or you'd see a much bigger difference between the 4850 and the 4870.
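As a rough sanity check, here's the bandwidth arithmetic behind that point; the data rates are the commonly quoted retail figures for the 4850/4870, used purely for illustration:

```python
# Peak memory bandwidth = effective data rate x bus width (illustrative numbers).

def bandwidth_gbs(effective_mts, bus_bits):
    """Peak bandwidth in GB/s from effective data rate (MT/s) and bus width (bits)."""
    return effective_mts * (bus_bits / 8) / 1000

print(bandwidth_gbs(2000, 256))  # HD 4850: GDDR3, ~2000 MT/s, 256-bit -> ~64 GB/s
print(bandwidth_gbs(3600, 256))  # HD 4870: GDDR5, ~3600 MT/s, 256-bit -> ~115 GB/s
print(bandwidth_gbs(2000, 512))  # hypothetical GDDR3 on a 512-bit bus -> ~128 GB/s
```

Either route roughly doubles bandwidth; the 4870's modest lead over the 4850 suggests the chip can't use all of it.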
 
Very cool article, btarunr... I can really see the potential in RV770 if they can clock the nuts off it :)
 
Going from 256-bit to 512-bit does the same thing that going from GDDR3 to GDDR5 does: it doubles the memory bandwidth. Apparently RV770 doesn't need that much bandwidth, or you'd see a much bigger difference between the 4850 and the 4870.

I would have pointed that out, but from his post I get the impression that he won't listen/understand.

It's like trying to explain that moving to DDR3 is a dumb move for normal users: it costs more and offers less performance (for the cheaper kits).

Most people are better off getting more RAM instead of "faster" RAM.
 
FudFighter, more RAM only helps with monitors at resolutions larger than 1280x1024; if you're playing at that resolution, 512 MB of RAM is enough or even overkill.
 
You misunderstand, I meant system RAM. A lot of people think 1 GB of DDR3 (system RAM) is better than 2 or even 4 GB of DDR2 (system RAM), and will argue about it to the end...

Eventually bandwidth does overtake the latency drawback.

But in the case of cheap DDR3 and 1 GB (especially with Vista), you're still better off with cheap, quality DDR2 right now. If you spend the money, DDR3 for Intel is a good move, but the cheap stuff at, say, 1333 @ 9-9-9-xx is NOT going to give the average user decent performance.

Try it yourself on Vista: take 1 GB of cheap RAM, use Vista for a while, then slap in a decent 2 or 4 GB kit and watch the difference... the performance boost is DRASTIC, even for desktop apps.

Most Joe Sixpack type users would be better off with 2 GB of cheap yet quality DDR2 than 1 GB of cheap DDR3, or with 4 GB of quality DDR2 versus 2 GB of cheap DDR3.

It's just a fact of how much memory apps, and Vista itself, use up these days :)
 
Well, obviously capacity plays a part in Vista, and that's because Vista is more resource-demanding than XP was. The minimum spec for Vista is something like 768 MB-1 GB of RAM, where the recommended is 2-4 GB. In general, NT is more resource-demanding than the other code bases MS has used for Windows.

Now for another example, 1 GB DDR2 vs 2 GB DDR3: I'd say most will go with capacity over speed because of Vista's memory demands. Beyond that, when you want to move from one RAM type to another, you have to switch out motherboards (overall cheaper than having to swap CPUs). But above all, let's get back on track with the video cards themselves, not system RAM.
 
FudFighter, don't even bother. They don't seem to understand what you're getting at lol.
 
But in the case of cheap DDR3 and 1 GB (especially with Vista), you're still better off with cheap, quality DDR2 right now. If you spend the money, DDR3 for Intel is a good move, but the cheap stuff at, say, 1333 @ 9-9-9-xx is NOT going to give the average user decent performance.

Few things you need to know:
  • GDDR3 ≠ DDR3
  • Sure, DDR3 gives you higher frequencies at latencies that look bad from a DDR2/DDR1 perspective (e.g. 1333 @ say 9-9-9-21), but because the frequency is higher, each clock cycle is shorter (there are more cycles per unit time), so those latencies don't become as much of a problem in absolute terms (see the quick arithmetic right after this list).
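A quick arithmetic sketch of that second point, assuming ordinary JEDEC-style timings (the kits named are just examples):

```python
# Absolute CAS latency in nanoseconds = CAS cycles / I/O clock.
# The I/O clock is half the effective data rate (DDR = two transfers per clock).

def cas_ns(cas_cycles, data_rate_mts):
    io_clock_mhz = data_rate_mts / 2
    return cas_cycles / io_clock_mhz * 1000

print(cas_ns(5, 800))   # DDR2-800 CL5  -> 12.5 ns
print(cas_ns(9, 1333))  # DDR3-1333 CL9 -> ~13.5 ns
```

CL9 sounds far worse than CL5, but in wall-clock time the gap is about one nanosecond.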
 
I say it's best to skip RAM generations: say, go from DDR to DDR3/4, or even from DDR to FB-DIMM.
 
Yeah, I know, btarunr. It wasn't about GDDR or DDR2 or whatever; it was more about people not understanding that just because something has a higher number doesn't make it better.

I should have used video cards as an example, I guess, since people didn't get that I was talking about why it's pointless to try and explain this stuff to some people.

Re-explanation:

Some people think a 9600 GT is better than an 8800 GT because 9600 is higher than 8800, when in reality the 8800 GT is hands down the better card.

Does that make it clearer what I meant?

GDDR5 runs at FAR higher clocks than GDDR3, so the clocks outbalance the bus width. That lets ATI make the PCB cheaper and less complex (fewer failed cards), whereas NVIDIA's PCBs cost a lot more to make, driving up price, and due to the extra complexity more of them fail to meet spec or have flaws that end up causing problems down the line (like a card that fails after a few months due to a bad trace burning out).

Sometimes cheaper is better!!!!
 
Dunno, m8. I've seen reviews showing 3800-series cards performing notably better in apps that can use Avivo, such as PowerDVD, WinDVD and the like, where Avivo takes load off the CPU by running the video processing almost entirely on the GPU.

I'm still waiting for some mainstream apps/codecs to use NVIDIA and ATI GPUs.

You missed my point. The UVD is what handles video decode on these cards. The 2900 didn't have it; that is what gave the 3800 cards their improvement. So it wasn't so much an Avivo improvement as it was them actually including the UVD this time. (In other words, it was a shot at ATI. ;) )
 
Some people think a 9600 GT is better than an 8800 GT because 9600 is higher than 8800, when in reality the 8800 GT is hands down the better card.

Does that make it clearer what I meant?

GDDR5 runs at FAR higher clocks than GDDR3, so the clocks outbalance the bus width. That lets ATI make the PCB cheaper and less complex (fewer failed cards)

Higher freq? No. GDDR5 doesn't run at higher frequencies; it pushes ~2x the data per unit time, and people choose to equate that to a higher frequency. The memory on an HD 4870 is 900 MHz (actual) while effectively 3600 MHz, whereas for GDDR3 to get there on the same bus width it would take 1800 MHz actual (something impossible), or 900 MHz actual on 2x the bus width.
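The same numbers, worked through; the HD 4870 figures are the commonly quoted retail clocks, used here only for illustration:

```python
# Effective data rate = actual memory clock x transfers per clock per pin.
# GDDR5 transfers 4 bits per pin per command-clock cycle; GDDR3 transfers 2.

def effective_mts(actual_mhz, transfers_per_clock):
    return actual_mhz * transfers_per_clock

print(effective_mts(900, 4))   # GDDR5 @ 900 MHz actual  -> 3600 MT/s (HD 4870)
print(effective_mts(1800, 2))  # GDDR3 would need 1800 MHz actual to match
print(effective_mts(900, 2))   # ...or 900 MHz actual on a bus twice as wide
```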

whereas NVIDIA's PCBs cost a lot more to make, driving up price, and due to the extra complexity more of them fail to meet spec or have flaws that end up causing problems down the line (like a card that fails after a few months due to a bad trace burning out)

Not sure where you got that from :confused:
 
Not sure where you got that from :confused:

I thought I heard about NVIDIA's PCBs costing more right before the launch of the GTX 2xx series. A larger memory bus on the PCB costs more.
 
I thought I heard about NVIDIA's PCBs costing more right before the launch of the GTX 2xx series. A larger memory bus on the PCB costs more.

It does. That's part of the reason the 3870 was so much cheaper than the 2900; the other part was the die shrink.
 
Right, and about the "burnout" part?
 
Right, and about the "burnout" part?

Who knows? lol. While I can't say they'd be more prone to having a trace burn out, the odds of a bad PCB are higher.
 
Who knows? lol. While I can't say they'd be more prone to having a trace burn out, the odds of a bad PCB are higher.

Ah, probability... the odds favour a horse over a giraffe to fly :toast:
 
Most likely pricing will be BS.
 
Ah, probability... the odds favour a horse over a giraffe to fly :toast:

The more complex the PCB, the more prone it is to flaws, just as the more complex the core/CPU, the more prone it is to flaws. Trace burnouts have happened because of flawed/damaged internal traces: say the person (or machine) laying a trace twists or slightly tears it; a flawed/damaged trace can overheat and burn out. I have seen this in complex PCBs before, and it's far more common than you may think. A lot of cards that die under stress could easily be dying from PCB errors, not just flawed/bad caps or chips.

Say a normal trace is ============== thick and you end up with a trace that's like this: =======--======. Wouldn't that small, overly thin area be more likely to burn out than a trace that's laid properly?

Now, this can happen in any PCB, but the more complex something is, the more chance something's going to be screwed up.

The old adage "the simplest plan plays the best" is true more times than not.
 
Those kinds of failures are not that common, and failure-rate numbers are always tricky. Imagine you have two models, A and B. A is much simpler than B, and because of that B has a failure rate 5x bigger than A's. Disaster, right? Not necessarily; we lack a lot of info. It often happens that the failure rate for A is smaller than 1%, so even though B's failure rate is much higher, B still ships above 95% successful products. This scenario is the most common one, and from an engineering point of view that 5% of failures is certainly a lot (they are obviously not doing well, and any engineer would say that), but that doesn't mean the product is going to be affected much, price-wise and so on.
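The arithmetic behind that example, with the same illustrative numbers:

```python
# "5x the failure rate" can still mean almost everything ships fine.
fail_a = 0.01        # model A: 1% failure rate (assumed for illustration)
fail_b = 5 * fail_a  # model B: 5x A's rate = 5%

print(f"A yield: {1 - fail_a:.0%}")  # 99%
print(f"B yield: {1 - fail_b:.0%}")  # 95% -- 5x worse, yet 95 of 100 cards are fine
```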
 