
NVIDIA GeForce 4XX Series Discussion

Status
Not open for further replies.
Joined
Sep 7, 2008
Messages
337 (0.08/day)
Location
Canberra Australia
Processor i5 750 4.2ghz 1.4V
Motherboard Gigabyte P55-UD4P
Cooling Corsair H70
Memory 2X2G Gskill @2140 8-11-8-28-1T
Video Card(s) 2xGainward Phantom3 570@860/4200
Storage 2xWD 500G AAKS Raid0 + 2XWD 1T Green in Raid0
Display(s) 24" HP w2408h+23" HP w2338h
Case SilverStone Raven2E-B
Audio Device(s) onboard for now
Power Supply Antec 1000W Truepower quattro
Software Win7 Ultimate 64bit
Joined
Sep 25, 2007
Messages
5,825 (1.31/day)
Processor Core I7 3770K@4.8Ghz
Motherboard AsRock Z77 Extreme
Cooling Cooler Master Seidon 120M
Memory 12Gb G.Skill Sniper
Video Card(s) EVGA GTX 1070 FTW 2150/2240
Storage Sandisk SSD + 1TB Seagate Barracuda 7200
Display(s) IPS Asus 26inch
Case Antec 300
Audio Device(s) Xonar DG
Power Supply EVGA Supernova 650 G2
Software Windows 10/Windows 7
Well, look at it... is it impossible for a 512-SP GTX 3xx part to be faster than an HD5970? Well... no. But the problem with these new cards will be the drivers. If Nvidia can put out mature drivers early then they might just get the edge they need (probably not, though), but if they can't, then the performance could be dismal (*looks at the 58xx drivers*).

All we can do is wait.
 
Joined
Jul 2, 2008
Messages
3,635 (0.87/day)
Location
California
The only way the GTX380 could be faster than the HD5970 is that the latter doesn't scale really well: the HD5970 is not 2x faster than the HD5870.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.72/day)
Location
Reaching your left retina.
Last I checked, Tesla GPUs were the SAME as their desktop brethren, much like the Quadro GPUs are the same as their regular consumer parts. All that tends to differ is the amount of RAM available, the GPU clock speed, and the support you get. Then again, if Nvidia didn't have to cut the GF100 down from 512 to 448 SPs, why are yields so low, why is heat such an issue, and why oh why is it taking so long to get good yields to release to the public??? Hmmm??? :roll: Just messing with you; it's true that no one knows for sure, but so far in my experience:

Quadro, Tesla and GeForce are all built off the same die, with the differences being RAM and drivers. For example, the ATI 3850 could be flashed to a FireGL card; at the time, a $175 GPU, once flashed, performed exactly like a $2300 GPU ;) The same goes for Nvidia, but they actually hardlock their GPUs so you can't do that.
Teslas are not the same cards; they use the same chip, which is not the same thing. First of all, they have 6GB of memory instead of the 1.5GB of the GeForce card: 24 GDDR5 modules instead of 12 on the GeForce cards (taken directly from the white paper). Considering that GDDR5 memory modules are rated at 2.5W TDP (white papers again, look around), that makes for 12 × 2.5 = 30 watts more on average consumption. Now, the 448 SP Tesla is rated at 193W TDP; 193 - 30 = 163W, so a similar GeForce would consume around 163W, far far away from the now-claimed 300W and problems staying below it. It just doesn't make any sense.

Those are official numbers, and rightly so: the HPC market is very serious about those numbers, and the white paper was the one sent to potential customers. Believe those numbers above anything else. The others about them having problems with heat etc. are rumors, just like the ones in another post above that said the card was 8% faster than the HD5970 and maxed at 55C under load. I find it really hard to believe that a card with 448 SP and 6GB VRAM would consume so much less than the GeForce card, even when the GF is faster. I say these new rumors about power consumption are misleading and way out of context, distorted while going from mouth to mouth. Probably they are about the GTX395 or GF104, because it makes sense: if Nvidia is certifying the cases, they are going to certify them for all their new cards; TT is not going to make new cases every 3 months, right??

And no one knows if the GF variant has been cut down. They are saying that it has NOT been cut down. Tesla chips were cut down because of what I said above: 30W more from memory and the need to stay below 225W. In the desktop market they don't have those problems. If the card is as fast as or close to the HD5970, it's natural for it to consume more than 225W and close to 300W, and even then I think it will consume less than 275W.
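The memory-power arithmetic in the post above can be sanity-checked in a few lines (all figures are the rumored white-paper numbers quoted in this thread, not confirmed specs):

```python
# Back-of-the-envelope check of the Tesla vs. GeForce TDP argument above.
# All figures are the rumored/white-paper numbers quoted in this thread.
GDDR5_MODULE_TDP_W = 2.5    # per-module TDP claimed in the white paper
tesla_modules = 24          # 6 GB Tesla card
geforce_modules = 12        # 1.5 GB GeForce card

extra_memory_w = (tesla_modules - geforce_modules) * GDDR5_MODULE_TDP_W
tesla_tdp_w = 193           # rated TDP of the 448-SP Tesla
geforce_estimate_w = tesla_tdp_w - extra_memory_w

print(extra_memory_w)       # 30.0 W of additional memory power
print(geforce_estimate_w)   # 163.0 W estimate for a similar GeForce
```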

Well, look at it... is it impossible for a 512-SP GTX 3xx part to be faster than an HD5970? Well... no. But the problem with these new cards will be the drivers. If Nvidia can put out mature drivers early then they might just get the edge they need (probably not, though), but if they can't, then the performance could be dismal (*looks at the 58xx drivers*).

All we can do is wait.
It's not. The fact of the matter is that the HD5870 should have been almost as fast as the HD5970 is, and the latter should have been much faster; let's say 50% faster than that. In reality we have an HD5xxx series that is much slower than what its specs suggested. Now Fermi's specs suggest more than 2x a GTX285, and there's NOTHING preventing it from achieving that, nothing.
 
Last edited:

bobzilla2009

New Member
Joined
Oct 7, 2009
Messages
455 (0.12/day)
System Name Bobzilla the second
Processor AMD Phenom II 940
Motherboard Asus M3A76-CM
Cooling 3*120mm case fans
Memory 4GB 1066MHz DDR2 Kingston HyperX
Video Card(s) Sapphire Radeon HD5870 1GB
Storage Seagate 7200RPM 500GB
Display(s) samsung T220HD
Case Guardian 921
Power Supply OCZ MODXSTREAM Pro 700w (2*25A 12v rail)
Software Windows 7 Beta
Benchmark Scores 19753 3dmark06 15826 3dmark vantage 38.4Fps crysis benchmarking tool (1680x1050, 4xAA)
Indeed, I think a next-gen GeForce card definitely could beat the HD5970, based on the apparent diminishing returns in current games from the added shaders. (3200 shaders only ~30% faster than 1600? Something is seriously wrong there.)

It could be that the HD5xxx series is very much geared to DX11 at the expense of current games, though. I don't know of a single game, bar Crysis, that pushes the HD5870 past 60% GPU usage, which may be a huge factor in the scaling issue. Games that are DX9/DX10 might just not know how to deal with the sheer number of shaders properly, or provide data in a way suited to such a huge number. For example, Fallout 3 at maximum settings reports a GPU usage of around 10-30% with the CPU at around 30-40%. So it will be interesting to see how ATI's and Nvidia's cards cope in the DX11 releases over the next few months.
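The diminishing-returns observation above (3200 shaders only ~30% faster than 1600) is what you would expect if only part of the frame time actually scales with shader count. A rough Amdahl's-law sketch, where the 46% shader-bound fraction is a purely hypothetical figure chosen to match the observed numbers:

```python
def speedup(parallel_fraction, n):
    """Amdahl's law: overall speedup when only a fraction of the work
    scales with the number of shader units."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# Hypothetical: if ~46% of frame time scales with shader count, then
# doubling the shaders (n=2) yields only ~1.3x overall.
print(round(speedup(0.46, 2), 2))
```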
 
Last edited:
Joined
Sep 25, 2007
Messages
5,825 (1.31/day)
Processor Core I7 3770K@4.8Ghz
Motherboard AsRock Z77 Extreme
Cooling Cooler Master Seidon 120M
Memory 12Gb G.Skill Sniper
Video Card(s) EVGA GTX 1070 FTW 2150/2240
Storage Sandisk SSD + 1TB Seagate Barracuda 7200
Display(s) IPS Asus 26inch
Case Antec 300
Audio Device(s) Xonar DG
Power Supply EVGA Supernova 650 G2
Software Windows 10/Windows 7
It's not. The fact of the matter is that the HD5870 should have been almost as fast as the HD5970 is, and the latter should have been much faster; let's say 50% faster than that. In reality we have an HD5xxx series that is much slower than what its specs suggested. Now Fermi's specs suggest more than 2x a GTX285, and there's NOTHING preventing it from achieving that, nothing.
I'm not saying it is impossible and

The fact of the matter is that the HD5870 should have been almost as fast as the HD5970
???? Can you explain that?
 

DrPepper

The Doctor is in the house
Joined
Jan 16, 2008
Messages
7,482 (1.72/day)
Location
Scotland (It rains alot)
System Name Rusky
Processor Intel Core i7 D0 3.8Ghz
Motherboard Asus P6T
Cooling Thermaltake Dark Knight
Memory 12GB Patriot Viper's 1866mhz 9-9-9-24
Video Card(s) GTX470 1280MB
Storage OCZ Summit 60GB + Samsung 1TB + Samsung 2TB
Display(s) Sharp Aquos L32X20E 1920 x 1080
Case Silverstone Raven RV01
Power Supply Corsair 650 Watt
Software Windows 7 x64
Benchmark Scores 3DMark06 - 18064 http://img.techpowerup.org/090720/Capture002.jpg

bobzilla2009

New Member
Joined
Oct 7, 2009
Messages
455 (0.12/day)
System Name Bobzilla the second
Processor AMD Phenom II 940
Motherboard Asus M3A76-CM
Cooling 3*120mm case fans
Memory 4GB 1066MHz DDR2 Kingston HyperX
Video Card(s) Sapphire Radeon HD5870 1GB
Storage Seagate 7200RPM 500GB
Display(s) samsung T220HD
Case Guardian 921
Power Supply OCZ MODXSTREAM Pro 700w (2*25A 12v rail)
Software Windows 7 Beta
Benchmark Scores 19753 3dmark06 15826 3dmark vantage 38.4Fps crysis benchmarking tool (1680x1050, 4xAA)
It does indeed, and it has hurt the 5970 hugely, since it is down ~15% out of the gate without an OC to 850MHz. I do feel ATI should have kept the 5870's clock speed and sold off any clock-speed failures as 5960s (and then have 5950s too).
 
Last edited:
Joined
Sep 25, 2007
Messages
5,825 (1.31/day)
Processor Core I7 3770K@4.8Ghz
Motherboard AsRock Z77 Extreme
Cooling Cooler Master Seidon 120M
Memory 12Gb G.Skill Sniper
Video Card(s) EVGA GTX 1070 FTW 2150/2240
Storage Sandisk SSD + 1TB Seagate Barracuda 7200
Display(s) IPS Asus 26inch
Case Antec 300
Audio Device(s) Xonar DG
Power Supply EVGA Supernova 650 G2
Software Windows 10/Windows 7
The clocks are a little lower (a good bit lower for the memory, though), but even then the performance problems I see with them seem to be scaling problems at lower resolutions.
 

SummerDays

New Member
Joined
Dec 8, 2009
Messages
276 (0.08/day)
All you have to do is waterblock a 5970 and you'll be able to overclock it like there's no tomorrow.

The clocks were purposely set lower to keep the card within thermal limits for the "average" user.
 
Joined
Aug 9, 2006
Messages
1,007 (0.21/day)
System Name [Primary Workstation]
Processor Intel Core i7-920 Bloomfield @ 3.8GHz/4.55GHz [24-7/Bench]
Motherboard EVGA X58 E758-A1 [Tweaked right!]
Cooling Cooler Master V8 [stock fan + two 133CFM ULTRA KAZE fans]
Memory 12GB [Kingston HyperX]
Video Card(s) constantly upgrading/downgrading [prefer nVidia]
Storage constantly upgrading/downgrading [prefer Hitachi/Samsung]
Display(s) Triple LCD [40 inch primary + 32 & 28 inch auxiliary displays]
Case Cooler Master Cosmos 1000 [Mesh Mod, CFM Overload]
Audio Device(s) ASUS Xonar D1 + onboard Realtek ALC889A [Logitech Z-5300 Spk., Niko 650-HP 5.1 Hp., X-Bass Hp.]
Power Supply Corsair TX950W [aka Reactor]
Software This and that... [All software 100% legit and paid for, 0% pirated]
Benchmark Scores Ridiculously good scores!!!
Right, some of the bundled glasses require a special dongle that is proprietary to a certain make, or even model.

Some can be used with pretty much any card, since they use a universal dongle that connects to the VGA output and passes the video through to the CRT's VGA cable. As long as you have a DVI-VGA adapter, you could still use those glasses with older drivers (and G80 or older) plus WinXP. You just might get it to work with a 60Hz LCD panel, but the shutter glasses will be shutting out half of that, so you only see 30Hz per eye; such an awful flicker! I've tried it a few times before by mistake when a game defaulted to 60Hz at a strange resolution. Not a pretty sight!

All this talk of 3D glasses had me wondering.... I had to dig them up. I was wrong. The set I got with the Trek game is wired as well. Photo below:



I couldn't find the ASUS set, but I know I've got it stashed somewhere. Anyway, I wouldn't be able to use those since I sold the ASUS card itself many years ago but kept the glasses (long story...). I went through the manual quickly and it says a CRT screen is required, although as you said, it can be set up to work with an LCD; I assume it would be a painful experience though.


As for the whole 5xxx vs. GT3xx discussion, I will say what I have said in other similar threads: AMD will not have me back as a customer in regards to their high-end GPUs until they get several issues taken care of.

The two primary reasons being:

1 - Terrible X-Fire/multi-GPU scaling issues. I was hoping this would get taken care of with the 5xxx series. However, as it stands now, SLI offers much more in the way of consistent scaling across more titles. I've seen too many benchmarks in which a single 5850 is faster than two of them, no matter what the resolution. I doubt the issue is on the hardware level. Either way, this is something that AMD needs to work on.

2 - Build quality issues. Dropping several partners like PowerColor, TuL, and others would go a long way towards working this out. These particular partners, as well as several others, are famous for cards that have missing caps out of the box, cards that die within hours of installation, and similar issues. They tend to "tweak" AMD's own designs (better words would be "cut them down") until the end product is pretty shaky. I understand that AMD not being in the lead (and having serious financial difficulties) leaves them in a position where they can't be picky, but these partners are doing more damage to AMD's reputation than they can imagine. (nVidia did some housecleaning back in 2006, when they dropped several Asian and N. American partners, for these exact reasons.) Plus, AMD themselves have been cutting corners on their overall designs. (This can be seen in the power regulation circuitry starting with the 3xxx series.) Yes, they are having financial issues, but this is something that needs to be taken care of.

There are several others, like driver issues along the lines of weird instabilities and inconsistencies, unnecessary glut and downright bugs, as well as a very slow driver release cycle.

I will still keep AMD solutions in mind when it comes to lower-end stuff like media boxes and laptops (when the choice is between AMD's GPU and Intel's); however, for my high-end gaming machines and workstations, nVidia is the only choice as I see it. (And before anyone gets started with "but my card"... I have personally owned many, MANY Radeon/AMD GPUs, including some of the recent ones, so I speak from experience.)
 

Attachments

Joined
Sep 25, 2007
Messages
5,825 (1.31/day)
Processor Core I7 3770K@4.8Ghz
Motherboard AsRock Z77 Extreme
Cooling Cooler Master Seidon 120M
Memory 12Gb G.Skill Sniper
Video Card(s) EVGA GTX 1070 FTW 2150/2240
Storage Sandisk SSD + 1TB Seagate Barracuda 7200
Display(s) IPS Asus 26inch
Case Antec 300
Audio Device(s) Xonar DG
Power Supply EVGA Supernova 650 G2
Software Windows 10/Windows 7
That's already well known... AMD does suck at making drivers and optimizing their own cards. Some may say otherwise, but that's what I believe.

I remember I had an HD3870X2... not even words can express...

I think it's mainly because AMD doesn't have an equivalent of a TWIMTBP program, and some say Nvidia screws AMD users with it, but I praise Nvidia for at least doing something.
 

bobzilla2009

New Member
Joined
Oct 7, 2009
Messages
455 (0.12/day)
System Name Bobzilla the second
Processor AMD Phenom II 940
Motherboard Asus M3A76-CM
Cooling 3*120mm case fans
Memory 4GB 1066MHz DDR2 Kingston HyperX
Video Card(s) Sapphire Radeon HD5870 1GB
Storage Seagate 7200RPM 500GB
Display(s) samsung T220HD
Case Guardian 921
Power Supply OCZ MODXSTREAM Pro 700w (2*25A 12v rail)
Software Windows 7 Beta
Benchmark Scores 19753 3dmark06 15826 3dmark vantage 38.4Fps crysis benchmarking tool (1680x1050, 4xAA)
Indeed, the board partners do always have a lot to do with final quality in most cases. But you can't seem to buy much other than XFX, Sapphire, Gigabyte and Asus for ATI cards in the UK anyway :) And ATI really do need to optimise the hardware, or help devs do so; my previous post pointing out that hardly any games push the HD5870 beyond 60% illustrates that point nicely.
 

troyrae360

New Member
Joined
Feb 2, 2009
Messages
1,129 (0.29/day)
Location
Christchurch New Zealand
System Name My Computer!
Processor AMD 6400+ Black @ 3.5
Motherboard Gigabyte AM2+ GA-MA790X DS4
Cooling Gigabyte G-Power 2 pro
Memory 2x2 gig Adata 800
Video Card(s) HD3870x2 @ 900gpu and 999mem
Storage 2x wd raid edition 120gig + 1 samsung 320 + samsung 250
Display(s) Samsung 40inch series6 full HD 1080p
Case NZXT Lexa
Audio Device(s) ALC889A HD audio (onboard)
Power Supply Vantec ION2+ 550w
Software Vista Home Premium 64
:laugh: Looks like everyone is so bored of waiting for a DX11 card from Nvidia that they have resorted to bashing ATI, which is the only company with a DX11 solution and has also had the best GPU on the market for months now :D
 

TheMailMan78

Big Member
Joined
Jun 3, 2007
Messages
22,599 (4.94/day)
Location
'Merica. The Great SOUTH!
System Name TheMailbox 5.0 / The Mailbox 4.5
Processor RYZEN 1700X / Intel i7 2600k @ 4.2GHz
Motherboard Fatal1ty X370 Gaming K4 / Gigabyte Z77X-UP5 TH Intel LGA 1155
Cooling MasterLiquid PRO 280 / Scythe Katana 4
Memory ADATA RGB 16GB DDR4 2666 16-16-16-39 / G.SKILL Sniper Series 16GB DDR3 1866: 9-9-9-24
Video Card(s) MSI 1080 "Duke" with 8Gb of RAM. Boost Clock 1847 MHz / ASUS 780ti
Storage 256Gb M4 SSD / 128Gb Agility 4 SSD , 500Gb WD (7200)
Display(s) LG 29" Class 21:9 UltraWide® IPS LED Monitor 2560 x 1080 / Dell 27"
Case Cooler Master MASTERBOX 5t / Cooler Master 922 HAF
Audio Device(s) Realtek ALC1220 Audio Codec / SupremeFX X-Fi with Bose Companion 2 speakers.
Power Supply Seasonic FOCUS Plus Series SSR-750PX 750W Platinum / SeaSonic X Series X650 Gold
Mouse SteelSeries Sensei (RAW) / Logitech G5
Keyboard Razer BlackWidow / Logitech (Unknown)
Software Windows 10 Pro (64-bit)
Benchmark Scores Benching is for bitches.
That's already well known... AMD does suck at making drivers and optimizing their own cards. Some may say otherwise, but that's what I believe.

I remember I had an HD3870X2... not even words can express...

I think it's mainly because AMD doesn't have an equivalent of a TWIMTBP program, and some say Nvidia screws AMD users with it, but I praise Nvidia for at least doing something.
ATI drivers are great man and as far as 3D glasses go Nvidia is late to the game.....



Indeed, the board partners do always have a lot to do with final quality in most cases. But you can't seem to buy much other than XFX, Sapphire, Gigabyte and Asus for ATI cards in the UK anyway :) And ATI really do need to optimise the hardware, or help devs do so; my previous post pointing out that hardly any games push the HD5870 beyond 60% illustrates that point nicely.
It couldn't be that no game out there needs that power yet? Naaaaaaa. Anyway, the fact it only uses 60% is a testament to how well they are optimized. I want an even lower number.
 
Joined
Feb 21, 2008
Messages
4,982 (1.16/day)
Location
Greensboro, NC, USA
System Name Cosmos F1000
Processor 7700K
Motherboard ASRock Z170 OC Formula
Cooling Corsair H100i, Panaflo's on case
Memory G.SKILL TridentZ 2x8GB DDR4 3400
Video Card(s) ASUS GTX 1080 ti
Storage 1TB 960 Pro, 2TB Samsung 850 Pro, 4TB WD Hard Drive
Display(s) ASUS ROG SWIFT PG278Q 27"
Case CM Cosmos 1000
Audio Device(s) Realtek® ALC1150 with logitech 5.1 system (midrange quality)
Power Supply CORSAIR HXi HX1000i 1000watt
Mouse G500 Logitech
Keyboard K65 RGB Corsair Tenkeyless Cherry Red MX
Software Win10 Pro, Win7 x64 Professional
ATI drivers are great man and as far as 3D glasses go Nvidia is late to the game.....

http://media.giantbomb.com/uploads/0/500/435061-ms_3_d_glasses_super.jpg



It couldn't be that no game out there needs that power yet? Naaaaaaa. Anyway, the fact it only uses 60% is a testament to how well they are optimized. I want an even lower number.
That is actually a driver issue. You should be using 100% of the GPU so frames never drop below your max display framerate, "if the card is so far ahead of everything on the planet" :laugh:. You can blame it on the game, but the driver is at fault the vast majority of the time. I remember I had to wait a month after Bioshock's release to get drivers that would not crash every 5 minutes. It wasn't a heat issue and there was no overclock. Tom's Hardware even talked about it in reviews of the ATI 3xxx series.

ATI still makes great cards, but they are far from having the driver support of Nvidia. The exception is PhysX with ATI cards involved, which is another can of worms that nobody fully understands (despite the emotions of those who insist one way or the other).
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.72/day)
Location
Reaching your left retina.
????, can you explain that
Based on the specs that we know (pixel fillrate, texture fillrate, FLOPS, etc.), the HD5870 should be at least as fast as 2x HD4890, but it's far from that. The second part, regarding the HD5970, is that it should be at least as fast as 2x HD5850, which it isn't either. For some reason the HD5xxx cards don't scale very well, although Crossfire does scale well, better than single cards in fact, so a CPU bottleneck is out of the equation.

Fermi could suffer something similar, but the chances of that happening are very, very low. The reality is that every Nvidia card in the past, at least since G80, has scaled according to its specs (as my charts showed), sometimes limited by its FLOPS capacity, sometimes by its pixel fillrate. In any case, both suggest it's going to be 2x a GTX285 and hence faster than the HD5970. Not faster than what the HD5970 should have been, but yes, faster than it actually is.



This is what I'm talking about: 2x HD4890 is not even that far from the HD5970...
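The spec-doubling claim above is easy to check against the public reference specs (core clock, shader count, ROPs; the VLIW5 ALUs do 2 FLOPs per clock for a multiply-add):

```python
# Reference specs: both cards run an 850 MHz core clock.
specs = {
    "HD4890": {"clock_mhz": 850, "shaders": 800,  "rops": 16},
    "HD5870": {"clock_mhz": 850, "shaders": 1600, "rops": 32},
}

def flops_gflops(s):
    # Each shader ALU does 2 ops per clock (multiply-add).
    return s["shaders"] * 2 * s["clock_mhz"] / 1000.0

def pixel_fill_gpix(s):
    # Pixel fillrate: one pixel per ROP per clock.
    return s["rops"] * s["clock_mhz"] / 1000.0

ratio = flops_gflops(specs["HD5870"]) / flops_gflops(specs["HD4890"])
print(ratio)  # 2.0 - every raw spec doubles, yet measured performance doesn't
```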
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.70/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
The GF100s will definitely not be limited by ROPs, that's for sure. If anyone is curious about a serious problem with the 5xxx series, they should buy themselves a 5850 and a 5870 and put them at the same clock speed. You'll notice they give exactly the same performance no matter the benchmark or utility. It's quite infuriating.
 

crazyeyesreaper

Chief Broken Rig
Staff member
Joined
Mar 25, 2009
Messages
9,371 (2.40/day)
Location
04578
Well, what do you expect? Most games don't even use the total shader count on the 4870/4890; for example, Dragon Age uses about 700 shaders or so out of 800. Now imagine that same scenario on a 5850 or 5870: if I'm only using, say, 1000 of 1440 shaders, there's untapped potential there. Basically, AMD/ATI needs to get their GPUs better optimized so as to make full use of their potential.

And Nvidia's TWIMTBP program tends to make sure developers' games are very well optimized for their hardware, and that will benefit Fermi fairly well.
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.70/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
Please, don't even start that sort of debate where it isn't necessary. There are tests and benchmarks that use the whole lot of resources the card can offer, and both cards match each other within ±1%.

This is not to say ATI is evil or NV is smarter... it's just how it is. The 5870 looked better in my case: it doesn't rattle like a 5850 can at higher fan speeds, but honestly the 5850 is the only card they released. The two "higher performance" cards are either the same thing with different clock speeds or a whole other GPU to back it up.

The point is that ATI happens to have made one awesome card (the 5850) and the rest are ROP-starved. I doubt the NV solution will be ROP-starved with 48 ROPs, and just about every other spec the card carries is above the 5xxx series cards. This, to me, means it may very well double the performance of the 285; I highly, HIGHLY count on it even. ATI's 4xxx series was a huge improvement, but there were scaling issues that were shown to be ROP-related. NV's cards scaled just fine with however many shaders they had activated; the 295 scaled in relation to 280s in SLI. The headroom in the specs gives them the ability to make cards that are actually higher performance than their own, more mainstream cards.
 

TheMailMan78

Big Member
Joined
Jun 3, 2007
Messages
22,599 (4.94/day)
Location
'Merica. The Great SOUTH!
System Name TheMailbox 5.0 / The Mailbox 4.5
Processor RYZEN 1700X / Intel i7 2600k @ 4.2GHz
Motherboard Fatal1ty X370 Gaming K4 / Gigabyte Z77X-UP5 TH Intel LGA 1155
Cooling MasterLiquid PRO 280 / Scythe Katana 4
Memory ADATA RGB 16GB DDR4 2666 16-16-16-39 / G.SKILL Sniper Series 16GB DDR3 1866: 9-9-9-24
Video Card(s) MSI 1080 "Duke" with 8Gb of RAM. Boost Clock 1847 MHz / ASUS 780ti
Storage 256Gb M4 SSD / 128Gb Agility 4 SSD , 500Gb WD (7200)
Display(s) LG 29" Class 21:9 UltraWide® IPS LED Monitor 2560 x 1080 / Dell 27"
Case Cooler Master MASTERBOX 5t / Cooler Master 922 HAF
Audio Device(s) Realtek ALC1220 Audio Codec / SupremeFX X-Fi with Bose Companion 2 speakers.
Power Supply Seasonic FOCUS Plus Series SSR-750PX 750W Platinum / SeaSonic X Series X650 Gold
Mouse SteelSeries Sensei (RAW) / Logitech G5
Keyboard Razer BlackWidow / Logitech (Unknown)
Software Windows 10 Pro (64-bit)
Benchmark Scores Benching is for bitches.
That is actually a driver issue. You should be using 100% of the GPU so frames never drop below your max display framerate, "if the card is so far ahead of everything on the planet" :laugh:. You can blame it on the game, but the driver is at fault the vast majority of the time. I remember I had to wait a month after Bioshock's release to get drivers that would not crash every 5 minutes. It wasn't a heat issue and there was no overclock. Tom's Hardware even talked about it in reviews of the ATI 3xxx series.

ATI still makes great cards, but they are far from having the driver support of Nvidia. The exception is PhysX with ATI cards involved, which is another can of worms that nobody fully understands (despite the emotions of those who insist one way or the other).
Well, I'm glad I haven't had any of the issues you're talking about.
 
Joined
May 4, 2009
Messages
1,940 (0.50/day)
Location
Singapore
System Name penguin
Processor i3-4160
Motherboard Asus H81 Mini-ITX
Cooling Stock
Memory 2x4GB Kingston 1600MHz
Video Card(s) Saphire Radeon 7850 2GB
Storage Plextor M5S 120GB+1TB Seagate
Display(s) 23' Dell
Case CM Elite 130
Audio Device(s) stock
Power Supply Corsair CX430m
Software W7/Lubuntu
Please, don't even start that sort of debate where it isn't necessary. There are tests and benchmarks that use the whole lot of resources the card can offer, and both cards match each other within ±1%.

This is not to say ATI is evil or NV is smarter... it's just how it is. The 5870 looked better in my case: it doesn't rattle like a 5850 can at higher fan speeds, but honestly the 5850 is the only card they released. The two "higher performance" cards are either the same thing with different clock speeds or a whole other GPU to back it up.

The point is that ATI happens to have made one awesome card (the 5850) and the rest are ROP-starved. I doubt the NV solution will be ROP-starved with 48 ROPs, and just about every other spec the card carries is above the 5xxx series cards. This, to me, means it may very well double the performance of the 285; I highly, HIGHLY count on it even. ATI's 4xxx series was a huge improvement, but there were scaling issues that were shown to be ROP-related. NV's cards scaled just fine with however many shaders they had activated; the 295 scaled in relation to 280s in SLI. The headroom in the specs gives them the ability to make cards that are actually higher performance than their own, more mainstream cards.
Agreed. The SP/ROP ratio has been kept the same between the 4xxx and 5xxx generations, and I remember a discussion here not long ago where it was shown that the 4830 scaled much better than its siblings due to its reduced number of SPs but identical ROP count.
Still, I do think this could be negated with better drivers...
 

Binge

Overclocking Surrealism
Joined
Sep 15, 2008
Messages
6,979 (1.70/day)
Location
PA, USA
System Name Molly
Processor i5 3570K
Motherboard Z77 ASRock
Cooling CooliT Eco
Memory 2x4GB Mushkin Redline Ridgebacks
Video Card(s) Gigabyte GTX 680
Case Coolermaster CM690 II Advanced
Power Supply Corsair HX-1000
If you look up what a ROP does in hardware, it explains why no amount of excess texture/shader throughput will give additional performance.
 

bobzilla2009

New Member
Joined
Oct 7, 2009
Messages
455 (0.12/day)
System Name Bobzilla the second
Processor AMD Phenom II 940
Motherboard Asus M3A76-CM
Cooling 3*120mm case fans
Memory 4GB 1066MHz DDR2 Kingston HyperX
Video Card(s) Sapphire Radeon HD5870 1GB
Storage Seagate 7200RPM 500GB
Display(s) samsung T220HD
Case Guardian 921
Power Supply OCZ MODXSTREAM Pro 700w (2*25A 12v rail)
Software Windows 7 Beta
Benchmark Scores 19753 3dmark06 15826 3dmark vantage 38.4Fps crysis benchmarking tool (1680x1050, 4xAA)
But in a CrossFire configuration, surely the number of ROPs has doubled? Unless only one set is used in such a configuration, which would explain the difference.

What is confusing is why the HD5870 isn't around twice the performance of the HD4890, since it has double the ROPs as well as the shaders. If it were purely down to ROP limitations, surely a shader overclock would make little to no difference. However, overclocking the HD5870 does result in very good performance boosts in proportion to the overclock.
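One way to reconcile those two observations: on the HD5870 the shaders and ROPs share the same core clock, so a core overclock lifts every unit at once, whereas adding shaders alone moves only one potential bottleneck. A toy min-of-bottlenecks sketch (all numbers hypothetical, for illustration only):

```python
def fps_estimate(shader_rate, rop_rate, shader_work=1.0, rop_work=1.2):
    """Toy model: frame rate is limited by the slower of two pipeline stages."""
    return min(shader_rate / shader_work, rop_rate / rop_work)

base         = fps_estimate(100, 100)  # ROP-bound baseline
more_shaders = fps_estimate(200, 100)  # doubling shaders alone: no gain
core_oc      = fps_estimate(110, 110)  # +10% core clock lifts both units

print(more_shaders == base)  # True: still ROP-limited
print(core_oc > base)        # True: a core OC scales through the bottleneck
```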
 

crazyeyesreaper

Chief Broken Rig
Staff member
Joined
Mar 25, 2009
Messages
9,371 (2.40/day)
Location
04578
Well, that would be because doubling the specs doesn't mean you will get double the performance.
 