Monday, September 14th 2009
First Radeon HD 5870 Performance Figures Surface
Here are some of the first performance figures of AMD's upcoming Radeon HD 5870 published by a media source. Czech Gamer posted performance numbers of the card compared to current heavyweights including the Radeon HD 4870 X2, Radeon HD 4890, and GeForce GTX 285. Not having entered an NDA with AMD, the source was liberal with its performance projections, citing AMD's internal testing which, apart from the two graphs below, includes the following:
Sources:
Czech Gamer, ChipHell
- Radeon HD 5870 is anywhere between 5~155 percent faster than GeForce GTX 285. That's a huge range, and leaves a lot of room for uncertainty (see the quick conversion sketch after this list).
- When compared to GeForce GTX 295, its performance ranges from -25 percent (25% slower) to +95 percent (almost 2x faster), another broad range.
- When two HD 5870 cards are set up in CrossFire, the resulting setup is -5 percent (5% slower) to 90 percent faster than GeForce GTX 295. Strangely, the range maximum is lower than that of the single card.
- When three of these cards are set up in 3-way CrossFireX, the resulting setup is 10~160 percent faster than a GeForce GTX 295.
- The Radeon HD 5850, on the other hand, can be -25 percent (25% slower) to 120 percent faster than GeForce GTX 285.
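For readers parsing the notation, here is a rough way to read those figures: a minimal sketch, assuming "X percent faster" simply means 1 + X/100 times the competing card's frame rate (the function and variable names are ours, purely for illustration):

```python
# Minimal sketch: turn "X percent faster / -X percent" claims into
# multipliers of the baseline card's performance. The ranges below are
# the ones quoted above; nothing here is an absolute frame rate.
def pct_to_multiplier(pct_faster: float) -> float:
    """Convert a 'percent faster/slower' figure into a relative multiplier."""
    return 1.0 + pct_faster / 100.0

claimed_ranges = {
    "HD 5870 vs GTX 285":          (5, 155),
    "HD 5870 vs GTX 295":          (-25, 95),
    "HD 5870 CF vs GTX 295":       (-5, 90),
    "HD 5870 3-way CF vs GTX 295": (10, 160),
    "HD 5850 vs GTX 285":          (-25, 120),
}

for matchup, (low, high) in claimed_ranges.items():
    print(f"{matchup}: {pct_to_multiplier(low):.2f}x to "
          f"{pct_to_multiplier(high):.2f}x the baseline")
```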
265 Comments on First Radeon HD 5870 Performance Figures Surface
In fact, DX11 tessellation is supposedly very different from the tessellation that ATI was doing on its own, outside of the DX API, so there's no advantage there, except maybe some more experience with it. And even then, tessellation is not even remotely new; anyone with a degree in something related to graphics knows the how-to very well, so I don't think there's going to be too much there. In the end it's all maths, a lot like calculating the average of the positions of two adjacent vertices.
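To put the "it's all maths" bit in concrete terms, here is a toy sketch (purely illustrative; this is plain midpoint subdivision, not ATI's hardware tessellator or the DX11 one) of how splitting geometry really is just averaging positions:

```python
# Toy illustration only: splitting each edge of a triangle at its midpoint
# turns one triangle into four smaller ones.
def midpoint(a, b):
    """Average of two vertex positions (the 'maths' in question)."""
    return tuple((a_i + b_i) / 2.0 for a_i, b_i in zip(a, b))

def subdivide(tri):
    """One level of midpoint subdivision: one triangle -> four triangles."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

# Example: one flat triangle becomes four after a single pass.
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(len(subdivide(tri)))  # -> 4
```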
2) In 2010 we will have many many titles supporting PhysX.
3) You don't pay extra so PhysX is Good.
4) In some games the difference is Extreme.
And did you know that most of Nvidia's current lineup is just old renames? hehe :nutkick:
And they aren't exactly renames; they have, for the most part, been refined versions of previous cards, not identical ones. The only true renames have been the 8800GT -> 9800GT (and even that was supposed to be a different card at first, but got reworked into just a rename due to the cost of retooling), and the 8800GS to 9600GSO.
And why do people bash PhysX? You get it free if you have an Nvidia card. What's to lose? I actually miss it. Loved the explosions in GRAW 2 with my 8800GT. They're just not as nice with this setup, but this setup does tear thru absolutely everything without PhysX. lol.
I know the interpolation is being done inside the ATI shader cores, but there's still a fixed-function unit AFAIK, so Nvidia may just forgo all fixed function?
Like using the fixed-function unit for a particular tessellation kernel, even though interpolation is still being done inside the shader cores.
i am sure they could still compete when they are reduced to $19.99
First of all you have to BUY an Nvidia card :shadedshu
Rename it to UE3.exe and turn AA on in the CCC - whaddya know, it works.
Suddenly a backflip, it's all good! ATI will fix it soon! And in all of those games, it doesn't affect gameplay one little bit. Run it in hardware or software mode and it has zero impact on the game - it's no different to having a 'debris', 'breakable glass' or 'realistic cloth' tickbox - if those features didn't use PhysX, no one would care one bit about them!

D3D11 works on DX10 cards, just like how Nvidia cards can run HAWX but with the 10.1 feature greyed out. If your card supports Stream/CUDA, then you'll end up getting the other features of DX11 working (compute shaders), letting you use 'DX11 physics' and such - and since it's coming to Vista as well, they're getting a large number of users with DX11 compatibility compared to when DX10 launched.

Unlikely. DX11's main improvements lie elsewhere than D3D11 (and no DX upgrade has ever given more FPS) - see below.

I heard the same things. ATI's tessellation isn't what's been used for DX11, so it's an even slate there. The 5870 should have one that meets the standards.

Name one. As I said a few posts above, I've tested them - my housemates have Nvidia cards, and I've done side-by-side comparisons. I've yet to see ANYTHING even remotely approaching gameplay-affecting since the map pack for Unreal 3, which was a showcase when it first came out.
Game devs ARE NOT making it a useful feature, so that they don't alienate users of laptops, old Nvidia cards, or ATI cards. Even if Nvidia has the largest market share, there is a minimum requirement for PhysX that most Nvidia cards don't meet. Many entry-level, onboard, and laptop GPUs don't have the power for it - and that's where most of Nvidia's cards are.
And oh, PhysX is a dead technology. With DirectCompute-driven physics acceleration available industry-wide, it would be foolhardy for developers of the DirectX 11 generation of games to opt for PhysX. Have fun playing those 20-odd present-gen games.
hardwarebg.com/forum/showpost.php?p=2069589&postcount=1220
I've seen very realistic physics on games that don't need Nvidia cards.
And with processors now being multi-core, it taking up CPU time is irrelevant, unless you're decoding video whilst you play games.
- Suppose the GTX285 gives 60 FPS @ 8xAF: then, with the 5870 at around 150% of that level, the 5870 would give around 90 FPS @ 8xAF.
- Now, suppose the GTX285 gives 40 FPS @ 16xAF: then, with the 5870 at almost 220% of that level, the 5870 would give around 87 FPS @ 16xAF.
But none of these graphs tell us anything.
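Here's the same arithmetic as a quick sketch (the baseline FPS numbers are made up purely for illustration, since the graphs give no absolutes, and the function name is just for the example) - the same percentage bar can land anywhere depending on what the GTX285 actually does:

```python
# Quick sanity check with hypothetical baseline numbers: a relative-percentage
# bar alone cannot tell you the resulting frame rate.
def projected_fps(baseline_fps: float, relative_percent: float) -> float:
    """5870 FPS if it performs at `relative_percent` % of the baseline card."""
    return baseline_fps * relative_percent / 100.0

print(projected_fps(60, 150))  # 8xAF example above  -> 90.0 FPS
print(projected_fps(40, 220))  # 16xAF example above -> 88.0 FPS
print(projected_fps(25, 220))  # same 220% bar, slower baseline -> 55.0 FPS
```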
I laughed when I saw the HAWX one - who cares? The game runs at 60 FPS anyway, so what is the point of telling us that the 5870 runs it at twenty more FPS than the X2 when they're both near or over 100?
It's all just a waste of time.
Neither these cards nor Nvidia's offerings are going to spank anything. They'll fall down in the exact same places that current cards do, and they'll continue that way for quite some time.
First of all, the power these new GPUs have is worthless when we don't have games that can really use them. Yes, we can raise the resolution and probably run an absurd 8xAA, but that isn't great graphics, just more polish; some cool new effects and more detailed games would be a better use for all that power.
Second, even if you plan to buy a new graphics card, why wait for what Nvidia will bring? It seems they are at the stage of rumors and demos; no pictures of an actual card, no nothing.
You people think that news like this is released by some idiot who stole some info from them? Of course not, it's released by themselves to build up hype. If Nvidia hasn't released anything by now, then it's clear they have NOTHING. It could be some months before we see something from Nvidia, and I bet you it will be expensive and not as good as everyone expected; as I read somewhere, they want to battle AMD/ATI on production costs and finally make some integrators happy (so companies like XFX won't start to make evil red cards).
So if you want a beast, you can go ahead and buy one of these 40nm babies, but my advice is to keep your money until some game needs a powerful GPU, and of course for the Nvidia fans to wait and see what the green camp releases. But I don't expect a miracle; they battle with AMD now, not with puny ATI, which was tiny compared to Nvidia.
Nvidia's reign is coming to an end faster than some expected. Look at the prices AMD has for the new generation; if these cards were from Nvidia, they would have been untouchable for most people. Look how much faster they released the DX11 generation; you can imagine these guys want Nvidia dead fast.
Also, the 40nm process was AMD's achievement, from the rumors around the web; Nvidia kind of had to get in line and wait its turn. You didn't contribute to making 40nm a reality as much as AMD did, so don't expect to be treated as an equal. I really don't see Nvidia as innovative these days: renaming products over and over again, being late with DX11 parts, and acting like a child by planning press demos of the future generation on the same day AMD releases its line, and who knows what other crap they spread that I don't know about.
Conclusion: AMD should tell Nvidia, let's see what you have now, motherf..... Demos? Then get the fuc.. out until you have some working parts, losers.
Nice punctuation Leonard.