Monday, September 14th 2009

First Radeon HD 5870 Performance Figures Surface

Here are some of the first performance figures for AMD's upcoming Radeon HD 5870, published by a media source. Czech Gamer posted performance numbers for the card compared to current heavyweights including the Radeon HD 4870 X2, Radeon HD 4890, and GeForce GTX 285. Not being under NDA with AMD, the source was liberal with its performance projections, citing AMD internal testing that includes the following, apart from the two graphs below:
  • Radeon HD 5870 is anywhere between 5~155 percent faster than GeForce GTX 285. That's a huge range, and leaves a lot of room for uncertainty.
  • When compared to the GeForce GTX 295, its performance ranges from -25 percent (25% slower) to +95 percent (almost 2x as fast), another broad range.
  • When two HD 5870 cards are set up in CrossFire, the resulting setup is -5 percent (5% slower) to 90 percent faster than the GeForce GTX 295. Strangely, the range maximum is lower than that of the single card.
  • When three of these cards are set up in 3-way CrossFireX, the resulting setup is 10~160 percent faster than a GeForce GTX 295.
  • The Radeon HD 5850, on the other hand, can be anywhere from -25 percent (25% slower) to 120 percent faster than the GeForce GTX 285.
AMD reportedly used a set of 15 games to run its tests. Vague as they seem, the above numbers raise more questions than they answer; the quick sketch below shows how such relative figures translate into frame rates. The graphs below are clear, for a change.
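To make the ranges concrete, here is a minimal sketch of the arithmetic behind "X percent faster" claims (the 60 FPS baseline is an arbitrary example, not a figure from AMD's testing):

```python
# Convert "X percent faster" claims into speedup multipliers and
# projected frame rates. The 60 FPS baseline is an invented example,
# not a figure from AMD's internal testing.

def projected_fps(baseline_fps, percent_faster):
    """A claim of 'N percent faster' means baseline * (1 + N/100)."""
    return baseline_fps * (1 + percent_faster / 100)

baseline = 60  # hypothetical GTX 285 frame rate

# HD 5870 vs GTX 285: +5 to +155 percent, per the leaked figures
print(projected_fps(baseline, 5), projected_fps(baseline, 155))    # 63.0 153.0

# HD 5850 vs GTX 285: -25 to +120 percent
print(projected_fps(baseline, -25), projected_fps(baseline, 120))  # 45.0 132.0
```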
Update: Here, allegedly, are AMD's own performance figures, sourced from Chinese website ChipHell.com.
Sources: Czech Gamer, ChipHell

265 Comments on First Radeon HD 5870 Performance Figures Surface

#126
jaydeejohn
Thing is, nVidia doesn't have a tessellator in their shrink, nor can anyone at this point estimate what other die-size costs DX11 will bring, besides the tessellator.
#127
Benetanegia
jaydeejohn: Thing is, nVidia doesn't have a tessellator in their shrink, nor can anyone at this point estimate what other die-size costs DX11 will bring, besides the tessellator.
AFAIK in DX11 tessellation is part of the Shader Model, so both brands should have tessellation inside the shaders, with no need for a separate tessellator. I mean, I suppose tessellation on the shaders is a requirement.

In fact DX11 tessellation is supposedly very different from the tessellation that Ati was doing on their own, outside of the DX API, so there's no advantage there, except maybe some more experience with it. And even then tessellation is not remotely new; anyone with a degree in something graphics-related knows the how-to very well, so I don't think there's going to be much to it. In the end it's all maths, a lot like averaging the positions of two adjacent vertices.
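Just to illustrate the "it's all maths" point, here's a toy sketch of subdividing a triangle by averaging adjacent vertex positions (a deliberately simplified illustration, nothing like a real DX11 hull/domain-shader tessellator):

```python
# Toy tessellation-as-interpolation: split one triangle into four by
# inserting the midpoint (average) of each edge. Real DX11 tessellation
# is far more involved; this only demonstrates the averaging idea.

def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

def subdivide(tri):
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

triangle = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
for t in subdivide(triangle):
    print(t)  # four smaller triangles covering the original
```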
#128
HellasVagabond
Imsochobo: Hi.

About this Ageia tech.
Perform a blind test and see if users can tell whether their gameplay experience was better on an ATI system or an nVidia system (same FPS numbers required for this test). Unless you tell them they should notice it, they rarely WILL. That's a fact. I'm not saying you see no difference in any game, but rather that in most cases you won't.
Eyefinity might be just as much of an argument as Ageia. Nevertheless, there is a replacement for Ageia but not for Eyefinity (Matrox, which means extra cost!); PhysX is dead soon.

CUDA is not.
Yet.
The same applies to Stream: nothing is going to be mainstream before BOTH have it, and I don't see any reason for anyone to really consider buying an nVidia card for something that is nVidia-only and requires game developers to support that technology.

Who would buy a damn car if only 10 roads were supported by the damn car?
1) There are around 20 titles that I am aware of with PhysX support, and I am sure there are more that I don't know of.
2) In 2010 we will have many, many titles supporting PhysX.
3) You don't pay extra, so PhysX is good.
4) In some games the difference is extreme.
#129
PCpraiser100
Incredible. To the X2 owners, you just got pwned.
#130
wiak
HellasVagabond: Why am I not surprised that both benchmarks were done with AMD-friendly games/applications? Had 3DMark not disabled PhysX for NVIDIA cards, this would be funny...

In any case, wait for real reviews before judging any product, by NVIDIA or AMD.
lol, a PhysX-enabled 3DMark Vantage run isn't even a valid score according to Futuremark.
And did you know that most of nVidia's current lineup is just old renames? hehe :nutkick:
#131
newtekie1
Semi-Retired Folder
wiak: lol, a PhysX-enabled 3DMark Vantage run isn't even a valid score according to Futuremark. And did you know that most of nVidia's current lineup is just old renames? hehe :nutkick:
Yeah, pretty amazing that cards that old can still compete performance-wise with all but ATi's top 3 cards... ;)

And they aren't exactly renames; for the most part they have been refined versions of previous cards, not identical ones. The only true renames have been the 8800GT -> 9800GT (and even that was supposed to be a different card at first, but got reworked into a straight rename due to retooling costs) and the 8800GS -> 9600GSO.
#132
Wile E
Power User
Yeah, I hate the renames, but it doesn't mean the cards aren't capable.

And why do people bash PhysX? You get it free if you have an nVidia card. What's to lose? I actually miss it. I loved the explosions in GRAW2 with my 8800GT. They're just not as nice with this setup, but this setup does tear through absolutely everything without PhysX. lol.
#133
jaydeejohn
Benetanegia: AFAIK in DX11 tessellation is part of the Shader Model, so both brands should have tessellation inside the shaders, with no need for a separate tessellator. I mean, I suppose tessellation on the shaders is a requirement.

In fact DX11 tessellation is supposedly very different from the tessellation that Ati was doing on their own, outside of the DX API, so there's no advantage there, except maybe some more experience with it. And even then tessellation is not remotely new; anyone with a degree in something graphics-related knows the how-to very well, so I don't think there's going to be much to it. In the end it's all maths, a lot like averaging the positions of two adjacent vertices.
So, you're saying nVidia is going to have no fixed-function tessellator at all, and it'll all be done through shaders?
I know the interpolation is being done inside the ATI shader cores, but there's still a fixed-function unit AFAIK, so nVidia may just forego fixed function entirely?
Like using the fixed-function unit for a particular tessellation kernel, even though interpolation is still being done inside the shader cores.
#134
bangmal
newtekie1: Yeah, pretty amazing that cards that old can still compete performance-wise with all but ATi's top 3 cards... ;)

And they aren't exactly renames; for the most part they have been refined versions of previous cards, not identical ones. The only true renames have been the 8800GT -> 9800GT (and even that was supposed to be a different card at first, but got reworked into a straight rename due to retooling costs) and the 8800GS -> 9600GSO.
Hey boy, nVidia needed a 9800GTX+ and a $100 price to compete against a 4850 :nutkick:

I am sure they could still compete if they were reduced to $19.99.
#135
Scrizz
Why do people keep saying you get PhysX for free?
First of all you have to BUY an nVidia card :shadedshu
#136
Wile E
Power User
Scrizz: Why do people keep saying you get PhysX for free?
First of all you have to BUY an nVidia card :shadedshu
Because if you already have an nVidia card, which most of the world's discrete graphics card owners do, it is free. If you get an ATI card, you still had to buy it, but you don't get PhysX with it.
#137
Mussels
Freshwater Moderator
InTeL-iNsIdE: It's marketing BS, and it is wrong for game devs to bend over for a quick buck from nVidia when the games shouldn't run any worse on ATI hardware :slap:
Take Batman: Arkham Asylum: "oh, we made the engine so AA only works on nVidia."
Rename it to UE3.exe and turn AA on in the CCC, and whaddya know, it works.
Suddenly a backflip, it's all good! ATI will fix it soon!
HellasVagabond: In the past couple of months I have seen 5 games released that support Ageia (including Batman: Arkham Asylum), so what do you base your assumption on? Ageia does NOT improve graphics; it makes them more realistic, so a flag is not exactly what I would call special in terms of rendering.
And in all of those games it doesn't affect gameplay one little bit. Run in hardware or software mode and it has zero impact on the game; it's no different to having a 'debris', 'breakable glass', or 'realistic cloth' tickbox. If those features didn't use PhysX, no one would care one bit about them!
erocker: You are quite right. Many people feel burned/scorned by DX10. Here's to hoping DX11 doesn't turn into DX10. :toast:
D3D11 works on DX10 cards, just like how nVidia cards can run HAWX but with the 10.1 feature greyed out. If your card supports Stream/CUDA, then you'll end up getting the other features of DX11 working (compute shaders), letting you use 'DX11 physics' and such. And since it's coming to Vista as well, they're getting a large base of users with DX11 compatibility compared to when DX10 launched.
Scrizz: If DX11 gives more FPS, I'm all for it.
Unlikely. DX11's main improvements lie elsewhere than raw D3D performance (and no DX upgrade has ever given more FPS).
jaydeejohn: Thing is, nVidia doesn't have a tessellator in their shrink, nor can anyone at this point estimate what other die-size costs DX11 will bring, besides the tessellator.
See below.
Benetanegia: AFAIK in DX11 tessellation is part of the Shader Model, so both brands should have tessellation inside the shaders, with no need for a separate tessellator. I mean, I suppose tessellation on the shaders is a requirement.

In fact DX11 tessellation is supposedly very different from the tessellation that Ati was doing on their own, outside of the DX API, so there's no advantage there, except maybe some more experience with it. And even then tessellation is not remotely new; anyone with a degree in something graphics-related knows the how-to very well, so I don't think there's going to be much to it. In the end it's all maths, a lot like averaging the positions of two adjacent vertices.
I heard the same things. ATI's tessellation isn't what's been used for DX11, so it's an even slate there. The 5870 should have one that meets the standard.
HellasVagabond: 1) There are around 20 titles that I am aware of with PhysX support, and I am sure there are more that I don't know of.
2) In 2010 we will have many, many titles supporting PhysX.
3) You don't pay extra, so PhysX is good.
4) In some games the difference is extreme.
Name one. As I said a few posts above, I've tested them; my housemates have nVidia cards, and I've done side-by-side comparisons. I'm yet to see ANYTHING even remotely gameplay-affecting since the map pack for Unreal Tournament 3, which was a showcase when it first came out.

Game devs are NOT making it a meaningful feature, so that they don't alienate users of laptops, older nVidia cards, or ATI hardware.
Wile E: Because if you already have an nVidia card, which most of the world's discrete graphics card owners do, it is free. If you get an ATI card, you still had to buy it, but you don't get PhysX with it.
Even if nVidia has the largest market share, there is a minimum requirement for PhysX that most nVidia cards don't meet. Many entry-level, onboard, and laptop GPUs don't have the power for it, and that's where most of nVidia's cards are.
#138
btarunr
Editor & Senior Moderator
Benetanegia: I don't know what to think about that. Rumors say GT300 will be 500 mm² or bigger, so that's far more than double what a GT200 would be at 40 nm. Also, RV870 is 330 mm² and has 2.1 billion transistors; by simple math GT300 would have around 3.2 billion transistors, more than double that of GT200. Also, Ati doubled up everything; nVidia doesn't need to do that, in theory. 32 ROPs is more than enough, and nVidia already had them. They already had a 512-bit memory bus too, and if you look at the die shots of GT200, half the chip was dedicated to ROPs/memory, so they could fit 3x the shader/texturing power into a chip that is twice the size or even more. What they do in the end, that's another story.
I still don't think that will translate into "3x the shader/texturing power" compared to GT200, although I don't write it off completely. For MIMD to prove effective, you'll need app-specific optimizations that will almost never work with any other GPU. With AMD's rising market share in the GPU industry, I don't think developers will venture into that.

And oh, PhysX is a dead technology. With DirectCompute-driven physics acceleration available industry-wide, it would be foolhardy for developers of the DirectX 11 generation of games to opt for PhysX. Have fun playing those 20-odd present-gen games.
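For reference, the quoted 3.2 billion figure is just linear density scaling; a quick check (the 500 mm² GT300 die size is a rumor, not a confirmed spec):

```python
# Back-of-the-envelope check of the quoted GT300 transistor estimate,
# assuming transistor density at 40 nm simply scales with die area.
rv870_transistors = 2.1e9  # RV870: 2.1 billion transistors
rv870_area_mm2 = 330       # RV870 die area
gt300_area_mm2 = 500       # rumored GT300 die area (unconfirmed)

density = rv870_transistors / rv870_area_mm2   # ~6.4M transistors per mm^2
estimate = density * gt300_area_mm2
print(f"GT300 estimate: ~{estimate / 1e9:.1f} billion transistors")  # ~3.2 billion
```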
#139
ste2425
Just looking at the first picture from that Chinese site, I saw Prey, a game I currently have that my rig can max out. I also noticed that from 8xAF to 16xAF there's almost a doubling in performance?
#141
Hayder_Master
This one beats the 4870 X2 in some tests; they've killed the old beast.
#143
pantherx12
Just adding to the PhysX discussion: I've always found the CPU processes physics well enough.

I've seen very realistic physics in games that don't need nVidia cards.

And with processors now being multi-core, it taking up processing power is irrelevant, unless you're decoding video while you play games.
#144
HTC
ste2425: Just looking at the first picture from that Chinese site, I saw Prey, a game I currently have that my rig can max out. I also noticed that from 8xAF to 16xAF there's almost a doubling in performance?
You seem to be misreading the chart, dude: those percentages are relative to the GTX 285's performance.

- Suppose the GTX 285 gives 60 FPS @ 8xAF: with the 5870 at around 150% of that, the 5870 would give around 90 FPS @ 8xAF.

- Now suppose the GTX 285 gives 40 FPS @ 16xAF: with the 5870 at almost 220% of that, the 5870 would give around 87 FPS @ 16xAF.
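Put as a quick calculation (the baseline FPS values are invented for illustration, just like in the example above):

```python
# Relative percentages don't map directly to absolute FPS: a bigger
# relative lead at 16xAF can still mean fewer absolute frames when the
# baseline drops. Baseline FPS values here are invented for illustration.

def fps_at(baseline_fps, relative_percent):
    # relative_percent is performance relative to the GTX 285 (100 = equal)
    return baseline_fps * relative_percent / 100

print(fps_at(60, 150))  # 8xAF:  GTX 285 at 60 FPS -> HD 5870 at 90 FPS
print(fps_at(40, 220))  # 16xAF: GTX 285 at 40 FPS -> HD 5870 at 88 FPS
```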
#145
newconroer
HellasVagabond: Why am I not surprised that both benchmarks were done with AMD-friendly games/applications? Had 3DMark not disabled PhysX for NVIDIA cards, this would be funny...

In any case, wait for real reviews before judging any product, by NVIDIA or AMD.
Funny you should mention that, 'cause if you notice, none of the graphs show games like Age of Conan, which was a major surprise probably even for AMD, especially given its nVidia support.

But none of these graphs tell us anything.
I laughed when I saw the HAWX one: who cares? The game runs at 60 FPS anyway; what is the point of telling us that the 5870 runs it at twenty more FPS than the X2 when they're both near or over 100?

It's all just a waste of time.

Neither these cards nor nVidia's offering are going to spank anything. They'll fall down in exactly the same places that current cards do. And they'll continue that way for quite some time.
#146
Mussels
Freshwater Moderator
newconroer: Funny you should mention that, 'cause if you notice, none of the graphs show games like Age of Conan, which was a major surprise probably even for AMD, especially given its nVidia support.

But none of these graphs tell us anything.
I laughed when I saw the HAWX one: who cares? The game runs at 60 FPS anyway; what is the point of telling us that the 5870 runs it at twenty more FPS than the X2 when they're both near or over 100?

It's all just a waste of time.

Neither these cards nor nVidia's offering are going to spank anything. They'll fall down in exactly the same places that current cards do. And they'll continue that way for quite some time.
The reason for HAWX is that it's really the only DX10.1 game on the list.
#147
porculete
Did anyone know that the actual TDP of this card is 376 W?!! ;) That means you need a fridge inside your desk to keep it cool.
#148
leonard_222003
Why everyone is hung up on what Nvidia will release , seems Nvidia did brainwashed some people around here and made some huge fanboy base for them.
First of all the power this new gpu's have is worthelles when we don't have games that can really use them , yes we can raise the resolution and probably an absurd 8xAA but this isn't great graphics , just more polished , some cool new effects and more detailed games would be great for all that power.
Second , even if you plan to buy a new graphic card , why waiting for what Nvidia will bring ? seems they are in the place of rumors and demo's , no pictures of actual card , no nothing.
You people think that news like this is released by some idiot who stoled some info from them ? of course not , it's released by themselves to build up hype , if Nvidia didn't released something until now then it's clear they have NOTHING , it could be some months before we see something from Nvidia and i bet you it will be expensive and not so good as everyone expected , as i read somewhere they want to battle with AMD/ATI in production costs and finally make some integrators happy ( so companys like XFX won't start to make evil red cards ).
So if you want a beast you can go ahead and buy one of these 40nm baby's, but my advice is to keep your money until some games needs a powerfull GPU , and of course for the Nvidia fans to see what the green camp releases but i don't expect a miracle, they battle with AMD now , not with punny ATI who was tiny compared to Nvidia.
Nvidia's reign is coming to an end faster than some expected , look at the prices AMD have for the new generation , if this were from Nvidia they would've been untouchable for most people , look how much faster they released the dx11 generation , so you can imagine this guys want Nvidia dead fast.
Also the 40nm process was a merit of AMD from the rumors around the web , Nvidia is kind of get in line and wait your turn , you didn't contributed to make 40nm a reality as much as AMD so don't expect to be treated as equal , i really don't see Nvidia as inovative these days , renaming products over and over again , being late with DX11 parts and acting like a child planing press demo's of future generation on same day as AMD releses their line , and who knows what crap they spread that i don't know about.
Conclusion , AMD should ask Nvidia let's see what you have now motherf..... , demo's ? then get the fuc.. out until you have some working parts losers.
#150
Valdez
porculeteAnyone knows that the actual TDP of this card is 376W?!!;) This mean that u need a fridge inside your desk to keep it cool.
That's not true. The tdp's are 188w for the 5870 and 170w for the 5850.