
AMD Confirms GDDR5 for ATI Radeon 4 Series Video Cards

You never know. That's an interesting point you make; ATI has always taken the initiative with technology.

Another interesting point you make is that games on ATI hardware catch up with NVIDIA after a few months. Do you have any link to a benchmark? I'm interested in seeing this.


I'll try and dig up the review I read a while back, if I can find it again - it's kinda hard to find legit reviews like that, seeing as not many sites go back and re-test when new driver releases come out . . .

we can kind of see it, though, in our e-peen "post your gameX benchmark score here" threads

but, again, I'll try and dig up what I remember seeing, and I'll post it back up here in this thread once I find it . . .
 
Water volume through tubing is what really matters, and 3/4" has lower laminar resistance than 7/16". However, the difference in volumetric throughput between the two is minimal enough that 7/16" is preferred due to simplicity in tubing runs.

Maybe because the 3/4" tubing has more surface area on the inter area of the tube which causes more friction on the coolant.....lol GDDR5 will be able to have smaller 256bit interface yet still have more bandwidth then a 512-bit 2900xt. I have no idea what this has to do with a garden hose tho.
 
How is that the case? The 2900 XT and 3870 looked great on paper, but they came out and didn't stand a chance.

The X1800 XT didn't put up a fight against the 7800 GTX 512, and the X1900 XTX tied it and was again beaten a few weeks later by the 7900 GTX and then the 7950 GX2. When the X1950 XTX showed up, the 8800 GTX arrived a month later. ATI hasn't been putting up a good showing for a while. Heck, look at the X1600 or HD 2600 cards compared to their direct competition.

Oh, don't make me link every farking review out there showing the 7950 GX2 for the POS it was; you're such an nvidiot..........

First, the GX2 vs. the X1950X2: the GX2 loses not just in performance but in support. The GX2 is trash; NVIDIA made it to keep the top numbers in a few games till the 8800 came out, that's it, and then they fully dumped its support. Sure, the drivers work, but quad SLI? And even the SLI performance of the GX2 vs. true SLI was worse, which is sad since it's basically two cards talking directly.

As to the X1900, it STOMPED the 7900/7950, cards that ON PAPER should have been stronger; 24 pipes vs. 16, for example, was what people were using to "prove" that the NVIDIA cards WOULD kill the X1900 range of cards.

I would make another massively long post, but you would just ignore it like all fanbois do, or resort to insults.
 
Well, think of it like this:
You have a water-cooling setup and want to decide on the size of the tubing. You can go with 3/4" inner-diameter tubing, but you run the risk of a slower flow rate due to the pump's barb being only 1/2" and its power output (more or less). Or you can get a tube with a 7/16" inner diameter (which is slightly smaller than 1/2"), which should maximize your flow rate. I believe this is what the following means:



If I am wrong, could someone clarify this? :o

Imagine 512 connections/wires coming from the bus to everywhere they need to go for the output. That's a lot of wires, and a lot of voltage control. With GDDR5, you have the ability to push the same amount of data, or a little more, faster than a 512-bit bus without all those wires - in this case, just 256. Also, GDDR5 "reads" the length of each connection, allowing for the correct voltage through the wire/line. This is important: it's more stable, keeping frequencies within proper thresholds, and it also eliminates the cost of going the more expensive route. Hope that helps.
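To put some rough numbers on that (the clocks below are my own illustrative assumptions, not official specs): peak bandwidth is just bus width times effective data rate, so a narrow GDDR5 bus can match or beat a wide GDDR3 one with half the traces.

```python
# Illustrative only: peak memory bandwidth = (bus width in bytes) x (effective data rate).
# The exact transfer rates here are assumptions made for the comparison.

def bandwidth_gb_s(bus_width_bits: int, effective_rate_mt_s: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective data rate (MT/s)."""
    return (bus_width_bits / 8) * effective_rate_mt_s / 1000

print(bandwidth_gb_s(512, 1650))  # ~105.6 GB/s - 512-bit GDDR3 at 2900 XT-style clocks
print(bandwidth_gb_s(256, 3600))  # ~115.2 GB/s - 256-bit GDDR5 at an assumed 3600 MT/s
```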
 
Imagine 512 connections/wires coming from the bus to everywhere they need to go for the output. That's a lot of wires, and a lot of voltage control. With GDDR5, you have the ability to push the same amount of data, or a little more, faster than a 512-bit bus without all those wires - in this case, just 256. Also, GDDR5 "reads" the length of each connection, allowing for the correct voltage through the wire/line. This is important: it's more stable, keeping frequencies within proper thresholds, and it also eliminates the cost of going the more expensive route. Hope that helps.

Thanks for the info :toast:
 
YW. This should dramatically cut down the cost of the PCBs and still provide great performance.
 
YW. This should dramatically cut down the cost of the PCBs and still provide great performance.

That is, if the cost of the GDDR5 doesn't cripple them... :(
 
YW. This should dramatically cut down the cost of the PCBs and still provide great performance.

Agreed... I still wonder what kind of performance you'd get with a 512-bit bus. I hope we find out with the X2 :D
 
How long till the NVIDIA fanboi says that ATI should have gone 512-bit and should have more pipes/ROPs?

Funny, since the X1900/X1950 XT/XTX cards had 16 pipes/ROPs vs. the 7900's 24, and the 7900 got pwned........

Meh, I'm sick of the "ATI sucks because *add bullshit FUD here*" or the "NVIDIA sucks because *add bullshit FUD here*" posts.

They both have their flaws and their good points.

The one thing I have almost always seen out of ATI since the 8500 has been INNOVATION. It hasn't always worked out the way they intended; the 2900/3800 are the prime example. The main issue was that ATI designed the R600/670 cores for DX10, not DX9. As such, they followed what Microsoft wanted to do with DX10+, which was to remove dedicated AA hardware and use the shaders to do the AA and other work. Of course this led to a problem: DX9 support was an afterthought, and as such it gave worse performance when you turned AA on.
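As an aside, here's a minimal sketch (my own illustration, not ATI's actual hardware path) of what a "shader-based" AA resolve boils down to: the box filter that dedicated resolve hardware applies to a pixel's sub-samples is simply run as ordinary shader math instead.

```python
from statistics import mean

# Minimal box-filter MSAA resolve: average each pixel's sub-samples per channel.
# Dedicated hardware does this in fixed-function logic; a "shader-based" resolve
# spends shader ALU cycles on the same arithmetic.

def resolve_pixel(samples):
    """Return the resolved RGB colour as the per-channel mean of the sub-samples."""
    return tuple(mean(channel) for channel in zip(*samples))

# Four sub-samples of a pixel sitting on a black/white edge:
samples = [(1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(resolve_pixel(samples))  # (0.5, 0.5, 0.5) -> the smoothed edge colour
```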

ATI thought, like many other companies did, that Vista would take off and be a huge hit, just like XP was when it came out, and with Vista being a big hit, DX10+ games would have come out en masse. But Vista fell on its face, and ATI still had this pure DX10 chip already in the pipe, so they ran with it, KNOWING it would have its issues/quirks in DX9 games.

NVIDIA, on the other hand, effectively took the opposite approach with the G80/92 cores: they built a DX9 part with DX10 support as an afterthought. In this case it was a good move, because with Vista not being a giant hit, game developers had no reason to make true, pure DX10 games.

NVIDIA didn't go DX10.1 because it would have taken some redesign work on the G92, and they wanted to keep their investment in it as low as possible to keep the profit margin as high as possible. It's why they lowered the bus width and the complexity of the PCB, it's why they didn't add DX10.1 support, and it's why the 8800 GT's reference cooler is the utter piece of shit it is (I have one; I can say with 100% certainty the reference cooler is a hunk of shit!!!!).

Now, I could go on and on about each company; the point is they have both screwed up.

Biggest screwups for each:

ATI: the 2900 (R600) not having a dedicated AA unit for DX9 and older games.

NVIDIA: the GeForce 5/FX line - horrible DX9 support that game developers ended up having to avoid because it ran so badly, forcing any FX owner to run all his/her games in DX8 mode. Also, the 5800 design was bad: high-end RAM with a small bus and an ungodly loud fan does not a good card make.


That's how I see it. At least ATI never put out a card touted as being capable of something that, in practice, it couldn't do even passably well......
 
That is, if the cost of the GDDR5 doesn't cripple them... :(

Doubt it will have any real impact from the card makers' end; they buy HUGE quantities of chips, getting a price that's FAR lower than the premium we consumers pay for that same RAM.

I had an article before my last HDD meltdown; it showed the actual cost per memory chip for video cards, DDR vs. DDR2 vs. DDR3 vs. DDR4.

DDR4 was more expensive, but that was mostly due not to it being new but to it being in short supply at the time. Still, the price you paid to get it on a card was extremely exaggerated; of course, it's "new", so they charge extra for it.

The cost difference between 2 and 3, again, wasn't that large, same with DDR vs. DDR2. Again, we are talking about companies that buy hundreds of thousands, if not millions, of memory chips at a time from their suppliers. Those suppliers want to stay on the good side of their customers so they keep making a profit, so they give them far better prices than they would ever admit to an outside party.

Also, the more you buy, the lower the per-unit cost is, same as with most things. Go check SuperMediaStore: if you buy 600 blanks, the price is quite a bit lower than buying 50 or 100 at a time ;)
 
Yeah, this is true!
The vendors get RAM at a nice price because they buy such large orders!
 
I would love to see somebody try that new Qmoda... whatever RAM that's higher density per chip; it would be interesting to see a video card that had 2 GB of high-bandwidth RAM...... or hell, use it for onboard video (ooh, that could rock - 4 chips for 512-bit (or something like that) would make onboard a hell of a lot better)....
 
How long till the NVIDIA fanboi says that ATI should have gone 512-bit and should have more pipes/ROPs? [...] That's how I see it. At least ATI never put out a card touted as being capable of something that, in practice, it couldn't do even passably well......


Doing AA on the shaders is a stupid, bad idea; MS doesn't even understand hardware, that's the problem. NVIDIA is not going to do DX10.1 because it requires shader-based AA, which is total junk and worthless. Sure, the AA might look better, but a 50% drop in FPS isn't worth it; I'll take dedicated hardware AA any day. What MS needs to do is discuss these ideas, not just sit around and think them up. Remember, if MS cuts NVIDIA out of DX entirely, OpenGL will make a massive comeback. MS has no choice but to do what NVIDIA tells it to do for this reason alone. Several problems exist with shader AA; if you can't see that, I'm sorry.

As for innovation, I beg to differ: what has ATI actually done? Shader AA was the worst idea I've heard of. Five groups of 64 shaders, but only one unit can do complex shader math - another bad idea. That's why ATI cards perform like 64-shader cards most of the time, and 128-shader cards if they're lucky.

GDDR5 is marketing hype; the latency alone kills it. New RAM types are never as good as the older ones on release. Look at the GDDR3 5700 Ultra vs. the regular 5700 Ultra: same performance because of the latency. Go ahead, give us 3000 MHz RAM with a 200 ms response time; it won't be any better than 2000 MHz RAM with an 80 ms response time. These are just random numbers, but it's the same reason people don't upgrade to DDR3. All hype from AMD and absolutely nothing to even care about. This time they might have a single-core solution that can tie the 8800 Ultra.
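For what it's worth, here's a rough sketch of the latency-vs-bandwidth trade-off being argued about (made-up numbers, in the same spirit as the ones above): a transfer takes roughly latency plus size divided by bandwidth, so latency dominates small random reads while bandwidth dominates big streaming ones, which is closer to the access pattern a GPU actually generates.

```python
# Made-up numbers, purely to illustrate the trade-off discussed above.

def access_time_ns(latency_ns: float, bandwidth_gb_s: float, transfer_bytes: int) -> float:
    """Approximate time for one memory transfer: latency + size / bandwidth."""
    return latency_ns + transfer_bytes / bandwidth_gb_s  # bytes / (GB/s) comes out in ns

fast_high_latency = dict(latency_ns=200, bandwidth_gb_s=96)  # "faster clocked" RAM
slow_low_latency  = dict(latency_ns=80,  bandwidth_gb_s=64)  # "slower clocked" RAM

for size in (1_024, 1_048_576):  # a 1 KB random read vs. a 1 MB streaming read
    print(size,
          round(access_time_ns(transfer_bytes=size, **fast_high_latency)),
          round(access_time_ns(transfer_bytes=size, **slow_low_latency)))
# 1 KB:   211 vs 96 ns    -> the lower-latency RAM wins on small random reads
# 1 MB: 11123 vs 16464 ns -> the higher-bandwidth RAM wins on big streaming reads
```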
 
I would love to see somebody try that new Qmoda... whatever RAM that's higher density per chip; it would be interesting to see a video card that had 2 GB of high-bandwidth RAM...... or hell, use it for onboard video (ooh, that could rock - 4 chips for 512-bit (or something like that) would make onboard a hell of a lot better)....

Hmmm, 2 GB of video RAM would only be good for extremely high resolutions...
 
Why are there so many furious Nvidia fans in here?
 
We're trying to save you from a stupid purchase.
 
We're trying to save you from a stupid purchase.

Come on now, candle... what has ATI ever done to you?

Seriously, there is no reason to hate ATI that much!
 
Oh yes there is. Plus, y'all are like family and this is an intervention; I have to save y'all from yourselves. IF you buy AMD products you will hate yourself for doing so; historically, NVIDIA has always been faster at the same price point.
 
Oh yes there is. Plus, y'all are like family and this is an intervention; I have to save y'all from yourselves. IF you buy AMD products you will hate yourself for doing so; historically, NVIDIA has always been faster at the same price point.

Look at my system specs <----
Look at my face ----> :D

I'm very happy with AMD/ATI.

My previous rig was NVIDIA and I was happy with that as well.
But hey, I'm not complaining... you have every right to say what you want.
 
Oh yes there is. Plus, y'all are like family and this is an intervention; I have to save y'all from yourselves. IF you buy AMD products you will hate yourself for doing so; historically, NVIDIA has always been faster at the same price point.

Most of us here are smart enough to know that the ATI cards we use are slower than NVIDIA's cards.
 
Most of us here are smart enough to know that the ATI cards we use are slower than NVIDIA's cards.

True... besides, you don't see me going into an NVIDIA news thread and bashing them...

No one likes a buzzkill!
 
Oh yes there is. Plus, y'all are like family and this is an intervention; I have to save y'all from yourselves. IF you buy AMD products you will hate yourself for doing so; historically, NVIDIA has always been faster at the same price point.

No one here needs saving; keep these types of comments to yourself. I believe you have already been warned on this subject before.
 
Doing AA on the shaders is a stupid, bad idea; MS doesn't even understand hardware, that's the problem. [...] This time they might have a single-core solution that can tie the 8800 Ultra.

Humm, maybe you need to check the Assassin's Creed reviews; seems shader-based AA isn't a bad idea if done natively by the game. The 9800 GTX and 3870 X2 were toe to toe, less than 1 FPS difference between them. Of course, you're a fanboi; wouldn't expect you to know that.

As to MS doing what another company tells it: wrong. MS could block OpenGL support if they wanted, and guess what, nobody could stop them. Everybody has to do what MS says, because the only other choice is to fall back into a niche market like Matrox has done.

As to your 5700 example, that doesn't mean shit; the 5700 was a piece of crap. It was the best of the FX line, but that's not saying much...... especially when a 9550 SE can outperform it LOL
This is DX10.1: 3870 X2 vs. 9800 GTX under SP1 (DX10.1 is enabled with SP1):
[benchmark graph]


But after we installed Vista SP1, an interesting thing happened. The performance of AMD's video card increased, while NVIDIA's performance did not. In fact, with SP1 installed there was less than a single frame per second difference on average between these two video cards.

Funny: shader-based AA vs. dedicated AA, and the performance difference is around 1 FPS.

so your "shader based AA is a stupid Idea" line is a load of fanboi bullshit(as expected from you)

The idea's fine if you're talking about native DX10/10.1 games, but today's games are mostly DX9 games with some DX10 shaders added (Crysis, for example).

As this shows, there is ZERO reason shader-based AA needs to be any slower in native code; it's just slower on older games. Hence, as I said, they should have had a hardware AA unit for DX9 and older games and used shader-based AA for DX10.x games; problem would have been solved.
 
You won't be seeing those huge latencies with this memory. I don't like to argue; just give it time and see what happens. The 4870 is going to be a killer card, and rumors have it at 25% over the 9800 GTX. NVIDIA's solution, the G280, should be faster, but it'll draw too much energy to do an X2, thus allowing the 4870 X2 to compete with it at the very top as a single-slot solution. Hopefully we will see that MCM on the 4870 X2.
 