
NVIDIA GeForce 4XX Series Discussion

Status
Not open for further replies.

bobzilla2009

New Member
Joined
Oct 7, 2009
Messages
455 (0.12/day)
System Name Bobzilla the second
Processor AMD Phenom II 940
Motherboard Asus M3A76-CM
Cooling 3*120mm case fans
Memory 4GB 1066MHz DDR2 Kingston HyperX
Video Card(s) Sapphire Radeon HD5870 1GB
Storage Seagate 7200RPM 500GB
Display(s) samsung T220HD
Case Guardian 921
Power Supply OCZ MODXSTREAM Pro 700w (2*25A 12v rail)
Software Windows 7 Beta
Benchmark Scores 19753 3DMark06, 15826 3DMark Vantage, 38.4 fps Crysis benchmark tool (1680x1050, 4xAA)
Indeed, and looking around, the HD5870 isn't too far off double the performance of the HD4890: around 70% faster apart from a few titles, which isn't bad at all IMO. It just confuses me; if a card is horribly ROP-starved, surely overclocking it would have no real effect? Or is it like I alluded to before, that only one set of ROPs works in a CFX configuration? And TBH, the HD5870 is pretty much double (±10%) the performance of the 4870 in nearly every game anyway, so I don't see why we're getting a sudden vibe of 'omg this card is awful' ^^. Maybe we're just frustrated that Nvidia won't tell us anything about their new cards!
 

wolf

Performance Enthusiast
Joined
May 7, 2007
Messages
5,621 (1.21/day)
System Name MightyX (MITX)
Processor Ryzen 7 3700X PBO
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Noctua NH-L12
Memory 16GB DDR4 3600 CL15
Video Card(s) Gigabyte GTX1080 G1 OC/UV
Storage Samsung 970 Evo m.2 NVME
Display(s) AOC AGON AG352QCX
Case Raven RVZ-01
Power Supply Corsair SF600
Mouse Zowie EC1-A
Keyboard Razer Blackwidow X Chroma
Or is it like I alluded to before, that only one set of ROPs works in a CFX configuration?
No, both cores work entirely: ROPs, shaders, etc. Even both sets of memory work, as I understand it; it's just the exact same data mirrored in both cards' memory.

ATi have done well with the 5000 series, there is no denying that. I just prefer a little more competition; ATi being on top is keeping their prices up and keeping the fanboys gloating. Neither is a good thing for either company or their respective followers.

Really I just want more DX11 titles. DiRT 2 is bogus in DX11: 99% of what you see is there in DX9, with ~30% better frame rates. The eye candy from DX11, weighed against the performance hit, just doesn't add up to me. Bring on AvP.
 

bobzilla2009

Indeed, I feel that most expectations and complaints are just fanboyism in the end :) and I agree about DiRT 2's DX11: why did they use tessellation on the crowd, for example? Why not make scenery such as the trees, rocks and the cars themselves uber-realistic? I do feel that Codemasters made a huge fuss about not much. Although the lighting effects in the (only 2!!!) night races were very good. It would also seem that the performance hit was quite large too; maybe they just didn't optimise properly, because they didn't really need to in order to retain decent fps.

TBH, I want to see AvP and BFBC2, but these both have DX9 modes too, so I'm not expecting absolutely amazing DX11 features. Still, I'm sure Crysis 2 will utilise DX11 quite well, if they haven't sold out to consoles and made us a crappy port.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.71/day)
Location
Reaching your left retina.
Indeed, and looking around, the HD5870 isn't too far off double the performance of the HD4890: around 70% faster apart from a few titles, which isn't bad at all IMO. It just confuses me; if a card is horribly ROP-starved, surely overclocking it would have no real effect? Or is it like I alluded to before, that only one set of ROPs works in a CFX configuration? And TBH, the HD5870 is pretty much double (±10%) the performance of the 4870 in nearly every game anyway, so I don't see why we're getting a sudden vibe of 'omg this card is awful' ^^. Maybe we're just frustrated that Nvidia won't tell us anything about their new cards!
According to every review I have seen, the HD5870 is 50% faster than the HD4890 at most, with W1zzard's review showing a mere 42% @ 1920x1200. It wouldn't really be a "problem" at all if it weren't for the fact that 2x HD4890 in CF are 20% faster than that, even with the CF scaling inefficiency attached. The HD5870 should be faster than 2x HD4890 by all means. The argument that "double specs doesn't mean double performance" fails badly in this case. It's true that doubling specs doesn't religiously double performance, but it has always offered an 80-90% increase, and a single card is always faster than two cards with half the power. In any case, the inefficiencies related to "doubling up specs", being "CPU limited" or "software/game limited" should hit both setups equally: HD5870 = 2x HD4890.

On topic.

All that would be off-topic, except that it can help us understand why Fermi can easily be faster than the HD5970, and why expecting it to be just 10% faster than the HD5870 is kinda naive. The reasons are:

- The HD5870 is 25% faster than the GTX285. 3 billion transistors, 2.15x the SPs plus higher clocks, resulting in 2.5x the raw power, and it's only going to be 35% faster than the GTX285? Even the FX line was 60% faster, and the HD2900 more than that (in both cases it was the competition that more than doubled its performance, leaving those two in the dust)...

- The HD5970 is 75% (1.75x) faster than the GTX285. Same argument as above. This time, because Fermi has 2.5x the raw power (150% higher), Fermi just needs a relative scaling efficiency (on top of the GTX285's own inefficiency) of 70% (0.7 * 2.5 = 1.75) in order to match the HD5970. Quite attainable considering that:

As I demonstrated with a chart in a previous post, every Nvidia card since the G92 has scaled almost to perfection as its specs grew. Expecting Nvidia to miss by more than 40% this time* is just naive, and I'm trying not to be offensive by using that label instead of a different one.

* Especially when at the same time they have done so much to increase the efficiency of the chip: two schedulers instead of one, a setup engine with doubled output (relatively doubled; in absolute terms four times greater), concurrent kernel execution, and improved, larger caches and registers...
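For what it's worth, the whole scaling argument above boils down to one line of arithmetic. A quick sanity check in Python (every figure is a rough estimate quoted in this thread, not a measured result):

```python
# Back-of-envelope math from the post above; all numbers are the
# thread's rough estimates, not benchmark data.

hd5970 = 1.75     # HD5970 vs GTX285 ("75% faster")
fermi_raw = 2.5   # claimed Fermi raw power vs GTX285 ("2.5x")

# Scaling efficiency Fermi would need just to tie the HD5970:
needed = hd5970 / fermi_raw
print(f"{needed:.0%}")  # 70%, matching the post's 0.7 * 2.5 = 1.75
```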
 

bobzilla2009

Indeed, I very much hope Nvidia do release a very powerful card (at least more powerful than the 5970), or else I think they may be in a bit of trouble in the market, since a lot of R&D cash will have gone into the new architecture. It just seems unlike them not to be pushing it in everyone's face that their card will be the best thing since sliced bread, and telling us not to buy ATi.

Instead we get a few recorded videos at CES which don't indicate anything other than Nvidia being shy. They could've, and I think would've, run a benchmark on Unigine IF they were definitely beating the 5970; instead all they've shown is 'we have a working card! go us!'. As it stands I'm not sure it is going to beat the 5970, based solely on how Nvidia are acting. I'm hoping it will for competition's sake (and for the future of GPU technology, since if Nvidia do fail, there are some major scaling issues going on for both parties that need to be rectified), since it's pretty certain ATi will already be going full steam ahead with HD6xxx series research and first designs.
 

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
15,925 (3.59/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K at stock (hits 5 gees+ easily)
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (4 x 4GB Corsair Vengeance DDR3 PC3-12800 C9 1600MHz)
Video Card(s) Zotac GTX 1080 AMP! Extreme Edition
Storage Samsung 850 Pro 256GB | WD Green 4TB
Display(s) BenQ XL2720Z | Asus VG278HE (both 27", 144Hz, 3D Vision 2, 1080p)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair HX 850W v1
Software Windows 10 Pro 64-bit
Indeed, I very much hope Nvidia do release a very powerful card (at least more powerful than the 5970), or else I think they may be in a bit of trouble in the market, since a lot of R&D cash will have gone into the new architecture. It just seems unlike them not to be pushing it in everyone's face that their card will be the best thing since sliced bread, and telling us not to buy ATi.

Instead we get a few recorded videos at CES which don't indicate anything other than Nvidia being shy. They could've, and I think would've, run a benchmark on Unigine IF they were definitely beating the 5970; instead all they've shown is 'we have a working card! go us!'. As it stands I'm not sure it is going to beat the 5970, based solely on how Nvidia are acting. I'm hoping it will for competition's sake, since it's pretty certain ATi will already be going full steam ahead with HD6xxx series research and first designs.
This agrees with my suspicion from a little while back in this thread: the new card could be another disappointing 2900 XT underperformer. I sure hope not. :ohwell:
 

bobzilla2009

Although I suppose we are all a bit over-expectant of the GF100; after all, the 5970 is a dual-GPU card. GF100 should be able to beat it, given the 5970's poor shader scaling, but if it slots in somewhere between the 5870 and the 5970 it won't have done badly, and will still have gained the single-card performance crown. I just think the delay will have skewed our expectations: if the card misses 5970 performance by 20% now, we'll dismiss it as a failure, but if it had done so 2-3 months ago, around when it SHOULD have come out, we would have applauded its success.

This delay will only get worse for Nvidia, especially if ATi have sorted out their 5-series shader scaling issues with the inevitable HD5890 and hit Nvidia in March.
 
Joined
Nov 13, 2007
Messages
7,726 (1.74/day)
Location
Austin Texas
System Name _
Processor 8700K @ 5.1 Ghz
Motherboard MSI Z370-A PRO
Cooling 120mm Custom Liquid
Memory 32 GB 3600 Mhz DDR4 16-16-16-36-380 trfc - 2T
Video Card(s) Gigabyte GTX 2080 Ti Windforce (Undervolted OC 1905MHz)
Storage 3x1TB SSDs
Display(s) Alienware 34" 3440x1440 120hz, G-Sync
Case Jonsbo U4
Audio Device(s) Bose Solo
Power Supply Corsair SF750
Mouse silent click gaming mouse
Keyboard tenkeyless
Software Windows 10 64 Bit
Yeah, true... we can do all the math and speculate all we want about the 'raw' performance numbers. But in all honesty Nvidia is going for something new. They will use drivers, instead of hardware, to bring some DX11 effects, which in itself is nothing like the 285. So the card's performance isn't just some simple equation of shaders * frequency = performance.

Hardware-wise it is impressive, but so was the 2900. All of the tech demos are not very impressive at all... And I agree: the fact that they CAN run the Unigine demo benchmark but didn't is not a great sign. It's just been too quiet for too long on the green front.
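The 'shaders * frequency = performance' point can be made concrete with a toy throughput model (a sketch with made-up figures, not the specs of any real card):

```python
# Naive model: peak throughput = shader units * clock * ops per clock.
# Real performance is peak times an architecture-dependent efficiency
# factor, which is exactly what the naive equation leaves out.

def peak_gflops(units: int, clock_mhz: float, ops_per_clock: int) -> float:
    """Theoretical single-precision peak in GFLOPS."""
    return units * clock_mhz * ops_per_clock / 1000.0

def effective_gflops(peak: float, efficiency: float) -> float:
    """Throughput actually achieved after scheduling and memory stalls."""
    return peak * efficiency

p = peak_gflops(512, 1400.0, 2)     # hypothetical 512-SP card
print(p, effective_gflops(p, 0.7))  # paper spec vs a 70%-efficient reality
```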
 

crazyeyesreaper

Chief Broken Rig
Staff member
Joined
Mar 25, 2009
Messages
9,384 (2.37/day)
Location
04578
Well, who knows; maybe those that love Nvidia can hope and believe that Nvidia will pull an Apple launch :roll:
 

qubit

Fermi out in March everyone!

Nvidia’s highly anticipated next-generation DirectX 11 child, codenamed Fermi, has a big chance to ship in March. This is what several people have been telling us, but of course there are no guarantees.
Is it just me, or is the excitement ebbing away? :rolleyes:

Everyone's favourite tech news source
 

bobzilla2009

It's disillusionment. We've not seen a launch delayed like this since the HD2900, and it was exactly like this: all the promising specs, and the final product was awful in the end. We'll have to wait until March (still two months away! ATi will have shipped 3 million DX11 cards by then!) to see if it really is any good, and even then, realistically it'll be August before any are reliably in stock.
 
Joined
May 4, 2009
Messages
1,940 (0.50/day)
Location
Singapore
System Name penguin
Processor i3-4160
Motherboard Asus H81 Mini-ITX
Cooling Stock
Memory 2x4GB Kingston 1600MHz
Video Card(s) Sapphire Radeon 7850 2GB
Storage Plextor M5S 120GB+1TB Seagate
Display(s) 23" Dell
Case CM Elite 130
Audio Device(s) stock
Power Supply Corsair CX430m
Software W7/Lubuntu
According to every review I have seen, the HD5870 is 50% faster than the HD4890 at most, with W1zzard's review showing a mere 42% @ 1920x1200. It wouldn't really be a "problem" at all if it weren't for the fact that 2x HD4890 in CF are 20% faster than that, even with the CF scaling inefficiency attached. The HD5870 should be faster than 2x HD4890 by all means. The argument that "double specs doesn't mean double performance" fails badly in this case. It's true that doubling specs doesn't religiously double performance, but it has always offered an 80-90% increase, and a single card is always faster than two cards with half the power. In any case, the inefficiencies related to "doubling up specs", being "CPU limited" or "software/game limited" should hit both setups equally: HD5870 = 2x HD4890.

On topic.

All that would be off-topic, except that it can help us understand why Fermi can easily be faster than the HD5970, and why expecting it to be just 10% faster than the HD5870 is kinda naive. The reasons are:

- The HD5870 is 25% faster than the GTX285. 3 billion transistors, 2.15x the SPs plus higher clocks, resulting in 2.5x the raw power, and it's only going to be 35% faster than the GTX285? Even the FX line was 60% faster, and the HD2900 more than that (in both cases it was the competition that more than doubled its performance, leaving those two in the dust)...

- The HD5970 is 75% (1.75x) faster than the GTX285. Same argument as above. This time, because Fermi has 2.5x the raw power (150% higher), Fermi just needs a relative scaling efficiency (on top of the GTX285's own inefficiency) of 70% (0.7 * 2.5 = 1.75) in order to match the HD5970. Quite attainable considering that:

As I demonstrated with a chart in a previous post, every Nvidia card since the G92 has scaled almost to perfection as its specs grew. Expecting Nvidia to miss by more than 40% this time* is just naive, and I'm trying not to be offensive by using that label instead of a different one.

* Especially when at the same time they have done so much to increase the efficiency of the chip: two schedulers instead of one, a setup engine with doubled output (relatively doubled; in absolute terms four times greater), concurrent kernel execution, and improved, larger caches and registers...
Dude, sorry, but you're the one acting like a total fanboy this time around! You're always complaining when somebody starts pulling numbers out of their ass, and stating how we should always be unbiased, and yet at the same time here you are propagating these false numbers.

And here's why I think that: Crysis is one of the more Nvidia-oriented titles, yet the 5870 scores 50% more fps than the 285, and the 5970 is more than twice as fast... If you expect everyone to act grown up and be brand-agnostic, you should start with yourself and set a good example for others.

Based on the released specs I expect Fermi to be no faster than the 5970, meaning a bit more than 2x the GTX285.
 

Benetanegia

Dude, sorry, but you're the one acting like a total fanboy this time around! You're always complaining when somebody starts pulling numbers out of their ass, and stating how we should always be unbiased, and yet at the same time here you are propagating these false numbers.
http://tpucdn.com/reviews/Powercolor/HD_5850_PCS_Plus/images/crysis_1920_1200.gif
And here's why I think that: Crysis is one of the more Nvidia-oriented titles, yet the 5870 scores 50% more fps than the 285, and the 5970 is more than twice as fast... If you expect everyone to act grown up and be brand-agnostic, you should start with yourself and set a good example for others.

Based on the released specs I expect Fermi to be no faster than the 5970, meaning a bit more than 2x the GTX285.
Oh yeah, let's base your facts on a single game at a single resolution and call me biased... :laugh: What about the rest of the games? I'm basing my numbers on W1zzard's average, not cherry-picking the numbers that best suit my POV.

Just 6 months ago Crysis was a highly unoptimised piece of crap, and now it's the only game we should take into account, because, yeah, now it's better than it was 6 months ago. Reason: ATi wins. Please...

EDIT: Just to show you how skewed your POV is on that last one, why not compare the cards based on this:



By far the most used engine.

Or maybe this one:



AFAIK the newest game in W1zzard's selection...

Did I use those numbers? I can assure you that, at least when it comes to the number of games, what you see in UT3 is what you are going to see in many, many games; would it be so bad to use that to compare, after all? Yet I didn't use those numbers.
 
Joined
May 4, 2009
Location
Singapore
Oh yeah, let's base your facts on a single game at a single resolution and call me biased... :laugh: What about the rest of the games? I'm basing my numbers on W1zzard's average, not cherry-picking the numbers that best suit my POV.

Just 6 months ago Crysis was a highly unoptimised piece of crap, and now it's the only game we should take into account, because, yeah, now it's better than it was 6 months ago. Reason: ATi wins. Please...
I'm not saying I'm 100% right either. I just took Crysis because it's a benchmark most people hold in high regard. W1z's reviews are great and all, but his game collection is not optimal for judging a card's average performance. You ask why? Because half the games are 4+ years old and simply unoptimised for the shader monstrosities these cards are... Crysis, on the other hand, does them justice because it is a shader-heavy game.
 

Benetanegia

Crysis, on the other hand, does them justice because it is a shader-heavy game.
I agree to some degree, but see above. Also, Fermi IS a shader monstrosity, and if future performance is going to depend solely on that, Fermi will probably be not 2x but 3x faster than a GTX285. Precisely in shader computing Fermi destroys the GT200. Internal and independent tests (many demos shown at GDC) have shown Fermi with as much as 5x the shading capability of the GT200.
 
Joined
May 4, 2009
Location
Singapore
I agree to some degree, but see above. Also, Fermi IS a shader monstrosity, and if future performance is going to depend solely on that, Fermi will probably be not 2x but 3x faster than a GTX285. Precisely in shader computing Fermi destroys the GT200.
Well, let's hope you're right. The faster it is, the greater the benefit to the consumer, and, excuse me if I sound too bold, to all of humanity as well, because HPC is moving to parallel processors (which is exactly what GPUs are).
 

bobzilla2009

Indeed, but specs are specs in the end; at the moment I think Nvidia need to pull a rabbit out of their ass, judging by how they are acting. And I agree that UT3 and HAWX are not really games to judge performance on; UT3, for example, pulls a cool 20-40% GPU usage on my HD5870. Crysis, on the other hand, pulls around 80-100% usage, and as such I trust that score more as an overall representation. In the end I do own the HD5870, and everything I learn from owning it is where I get my information. Tables can only do so much and cannot depict the bigger picture, especially considering DX9 games.
 

Benetanegia

To further prove my point regarding the HD5870, and in the hope that it's understood that I'm not saying this out of any bias:

As you can see, 2x HD4890 does much better than the HD5870 in Crysis too, again "closer" to the HD5970.

Crysis proves what optimisation can do for a game, and we have to thank AMD for improving their Crysis drivers so much, even if it took a long time. The HD4870 X2 is faster than the GTX295 in this game and at this resolution. It just proves AMD is faster in that game now.
 
Joined
May 4, 2009
Location
Singapore
Indeed, but specs are specs in the end; at the moment I think Nvidia need to pull a rabbit out of their ass, judging by how they are acting. And I agree that UT3 and HAWX are not really games to judge performance on; UT3, for example, pulls a cool 20-40% GPU usage on my HD5870. Crysis, on the other hand, pulls around 80-100% usage, and as such I trust that score more as an overall representation. In the end I do own the HD5870, and everything I learn from owning it is where I get my information. Tables can only do so much and cannot depict the bigger picture.
And UT3 and HAWX are still relatively new. What about games like Far Cry ONE and F.E.A.R.? Sheesh, I was still a teen when those came out :D
 
Joined
Jul 19, 2006
Messages
42,985 (8.71/day)
Processor i7 8700K
Motherboard Asus Maximus Hero X WiFi
Cooling Water
Memory 32GB G.Skill 3200Mhz CL14
Video Card(s) RX 5700 XT
Storage SSD's
Display(s) Nixeus EDG27
Case Lian Li PC 011 Dynamic
Audio Device(s) Yamaha AG03
Power Supply Corsair H1000i
Mouse PCMR Model O
Keyboard Razer BlackWidow Tournament Ed.
Software Windows 10 Enterprise
Reason Ati wins. Please...
Remember, through all the technobabble, Nvidia still doesn't have a new card out. ATi is currently "winning." Like it matters. :laugh:
 

Benetanegia

Indeed, but specs are specs in the end; at the moment I think Nvidia need to pull a rabbit out of their ass, judging by how they are acting. And I agree that UT3 and HAWX are not really games to judge performance on; UT3, for example, pulls a cool 20-40% GPU usage on my HD5870. Crysis, on the other hand, pulls around 80-100% usage, and as such I trust that score more as an overall representation. In the end I do own the HD5870, and everything I learn from owning it is where I get my information. Tables can only do so much and cannot depict the bigger picture, especially considering DX9 games.
Shader usage is NOT a good measure on the Radeon cards either; it means even less than actual performance, if you ask me. Why? Because the AMD architecture is prone to inefficiency by design. It's based on 5-ALU-wide shader processors that can only be fed through a VLIW instruction. In order to use all the shaders, all five ALUs have to be filled at the same time, when the DX call is made. That's not always possible, and it depends a lot on what game programmers made their engine do, and how. And judging by its results I wouldn't say Unreal Engine 3 is a crappy engine, so...
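The VLIW point is easier to see with a toy utilisation model. A minimal sketch, assuming a simplified compiler that can bundle at most five independent scalar ops into one instruction word (an illustration only, not the real Radeon shader compiler):

```python
# Toy VLIW5 model: each instruction word has 5 ALU slots, and only
# mutually independent ops can share a word, so dependency chains
# leave slots idle.

def vliw5_utilization(ops_per_bundle: list[int]) -> float:
    """Fraction of ALU slots filled, given independent ops per bundle."""
    used = sum(min(n, 5) for n in ops_per_bundle)
    return used / (5 * len(ops_per_bundle))

print(vliw5_utilization([5, 5, 4, 5]))  # independent math packs well: 0.95
print(vliw5_utilization([1, 1, 2, 1]))  # a dependency chain packs badly: 0.25
```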

Remember, through all the technobabble, Nvidia still doesn't have a new card out. ATi is currently "winning." Like it matters. :laugh:
Of course they are, and I never said the contrary. We are discussing tech, and tech has nothing to do with sales, delays, or the fact that Nvidia doesn't want to disclose specific performance numbers. It's not related, and that's why I'm posting about the tech instead.
 
Joined
May 4, 2009
Location
Singapore
Shader usage is NOT a good measure on the Radeon cards either; it means even less than actual performance, if you ask me. Why? Because the AMD architecture is prone to inefficiency by design. It's based on 5-ALU-wide shader processors that can only be fed through a VLIW instruction. In order to use all the shaders, all five ALUs have to be filled at the same time, when the DX call is made. That's not always possible, and it depends a lot on what game programmers made their engine do, and how. And judging by its results I wouldn't say Unreal Engine 3 is a crappy engine, so...
The UT3 engine is definitely not crappy, but it ain't anything special either :p After all, you can run it on your iPhone :D

http://toucharcade.com/2009/12/22/unreal-engine-3-running-on-3rd-gen-ipod-touch-iphone-3gs/
 

bobzilla2009

Indeed, the HD4890 now outperforms the GTX285 (albeit not by a significant margin), but in terms of a generational jump, the HD4870 to the HD5870 is around +80%. I would imagine the HD5890 would be about the same, if not more so, compared to the HD4890.

And of course ATi are winning this generation at the moment, just by default ^^ They have shipped 2 million units now, and will probably hit their third million by the time March comes.

With regards to programming and DX calls, DX11 should improve such processes a lot (DX10 doesn't do a terrible job), meaning we won't see the true performance of the HD5870 in DX9 games, and that is what my li'l old card is telling me. Even good ol' benchmarks say so: why is it that my HD5870 only gets twice (and a little bit) the 3DMark06 score of my old HD4670, but over 5-6 times the fps in Vantage? In the end it's because no one needs amazing DX9 performance anymore; all the next-gen cards could play MW2 in power-saving mode, for example (I can still achieve 100 fps @ 1080p w/ 4xAA with the core clock down at 640MHz; the GPU usage just goes up accordingly).

Maybe this is the generation where hardware stops being oriented towards DX9 performance, leaving it at around the same level and focusing on DX11 performance instead. After all, DX9 has its hardware limitations too.
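The 3DMark06-vs-Vantage observation can be put into numbers. A rough sketch using the ratios from the post (the poster's own observations, not reproducible benchmarks):

```python
# If the same pair of cards shows a ~2x gap in a DX9 test but a ~5.5x
# gap in a DX10 test, the DX9 run leaves most of the faster card idle.

dx9_speedup = 2.0    # HD5870 vs HD4670 in 3DMark06 ("twice and a bit")
dx10_speedup = 5.5   # same pair in Vantage (midpoint of "5-6 times")

idle = 1 - dx9_speedup / dx10_speedup
print(f"~{idle:.0%} of the HD5870 sits unused in the DX9 run")  # ~64%
```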
 
Joined
May 4, 2009
Location
Singapore
Shader usage is NOT a good measure on the Radeon cards either; it means even less than actual performance, if you ask me. Why? Because the AMD architecture is prone to inefficiency by design. It's based on 5-ALU-wide shader processors that can only be fed through a VLIW instruction. In order to use all the shaders, all five ALUs have to be filled at the same time, when the DX call is made. That's not always possible, and it depends a lot on what game programmers made their engine do, and how. And judging by its results I wouldn't say Unreal Engine 3 is a crappy engine, so...



Of course they are, and I never said the contrary. We are discussing tech, and tech has nothing to do with sales, delays, or the fact that Nvidia doesn't want to disclose specific performance numbers. It's not related, and that's why I'm posting about the tech instead.
Please don't get me started on architecture! :) We can argue FOREVER about which is more inefficient... For example, how is the GTX285 more efficient than the HD4890 when, for 50% more die real estate (and that's excluding the NVIO display chip), you get only 15-20% more performance in return? :p
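The die-area argument is just a performance-per-area ratio. As a quick sketch with the post's rough figures (taking the midpoint of the quoted 15-20%):

```python
# Relative performance per unit of die area, using the post's numbers.

hd4890_area, hd4890_perf = 1.0, 1.0
gtx285_area, gtx285_perf = 1.5, 1.175   # "50% more die", "15-20% faster"

ratio = (gtx285_perf / gtx285_area) / (hd4890_perf / hd4890_area)
print(f"GTX285 delivers {ratio:.0%} of the HD4890's performance per mm^2")  # 78%
```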
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.71/day)
Location
Reaching your left retina.
Indeed, the HD4890 now outperforms the GTX285 (albeit not by a significant margin), but in terms of a generational jump, the HD4870 to the HD5870 is around +80%. I would imagine the HD5890 would be about the same, if not more so, compared to the HD4890.

And of course ATi are winning this generation at the moment, just by default ^^ They have shipped 2 million units now, and will probably hit their third million by the time March comes.

With regards to programming and DX calls, DX11 should improve such processes a lot (DX10 doesn't do a terrible job), meaning we won't see the true performance of the HD5870 in DX9 games, and that is what my li'l old card is telling me. Even good ol' benchmarks say so: why is it that my HD5870 only gets twice the 3DMark06 score of my old HD4670, but over 5-6 times the fps in Vantage? In the end it's because no one needs amazing DX9 performance anymore; all the next-gen cards could play MW2 in power-saving mode (I can still achieve 100 fps with the core clock down at 640MHz; the GPU usage just goes up accordingly).
By March they will have shipped more than 5-6 million, I'm sure of that, and that's without taking into account the unreleased lower-end cards. The only reason they have sold so little (yes, it's little; the HD3850 was reported to sell 200k in the first week...) was that TSMC couldn't make enough chips. It's no coincidence that AMD is shipping far more cards now, and that Nvidia is now ramping up 2 weeks sooner than they said in November.
 