
Official AMD Radeon 6000 Series Discussion Thread

AltecV1

New Member
Joined
Jan 1, 2009
Messages
1,286 (0.23/day)
Location
Republic of Estonia
Processor C2D E8400@3.6 ghz
Motherboard ASUS P5KPL-AM
Cooling Freezer 7 pro
Memory Kingston 4GB 800Mhz cl6
Video Card(s) Sapphire HD4850 700/1000 + Accelero S1 Rev. 2
Storage WD 250 GB AAKS
Display(s) 22" Samsung SyncMaster 226BW
Audio Device(s) int.
Power Supply Forton Blue Storm II 500W
Software Windows 7 64bit Ultimate
24 fps is not playable!!! 60+ is what's considered a smooth and responsive frame rate
 
Joined
Jan 2, 2009
Messages
9,899 (1.77/day)
Location
Essex, England
System Name My pc
Processor Ryzen 5 3600
Motherboard Asus Rog b450-f
Cooling Cooler master 120mm aio
Memory 16gb ddr4 3200mhz
Video Card(s) MSI Ventus 3x 3070
Storage 2tb intel nvme and 2tb generic ssd
Display(s) Generic dell 1080p overclocked to 75hz
Case Phanteks enthoo
Power Supply 650w of borderline fire hazard
Mouse Some weird Chinese vertical mouse
Keyboard Generic mechanical keyboard
Software Windows ten
24 fps is not playable!!! 60+ is what's considered a smooth and responsive frame rate


tell that to millions of console gamers happily playing at 30fps.
 
Joined
Dec 3, 2010
Messages
301 (0.06/day)
Location
star citizen
System Name spirit-crusher
Processor i7-4770k@4.5Ghz
Motherboard maximums hero vl
Cooling water 3 extreme
Memory 16GB KHX2400
Video Card(s) gtx780 N780OC
Storage WD CB 1tb FZEX
Display(s) SAMSUNG 2333SW lcd
Case CM storm trooper
Audio Device(s) on board
Power Supply Corsair tx 750W
Software win8.1 64bit pro
Benchmark Scores http://www.3dmark.com/3dm11/8568775
Anas, that's normal and nothing to worry about. Less fan speed means less noise, and the temps are well within safe limits.

Games will never push the GPU as hard as benchmarks, so I wouldn't worry about it. If you want, you can flash the BIOS to set your own fan profiles, or use software like Afterburner to set the fan how you want.

Looks like I lost 6 °C after the new driver; it stands at 80 °C now after 10 min of the stability test.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.61/day)
Because it's the same frame rate you get on your TV for the most part, and at the cinema; for 90% of people, not juttery = playable.

Actually, anything sustained above 24 frames should be TV-quality video and should play smoothly. We'd all like to see higher, but 30 frames is definitely playable and should run smooth.

tell that to millions of console gamers happily playing at 30fps.

Ok, I'll just answer with this...are you playing PC games on your TV? If so, then I can agree with you.




However... can you guarantee that the ~30 FPS is a smoothly rendered framerate, or does the latency between rendered frames vary?


I can tell you that the display output technology of television sets and monitors differs enough that a PC screen needs frame rates at its 60 Hz refresh rate, whereas HDTVs support a 23.976 Hz refresh that smooths out those frames that monitors lack.


It's not always just about the numbers. You need to examine both how those numbers are generated and what those numbers translate to further down the line. There is a distinct difference in display technology between the PC platform and the console platforms, and it's not talked about very often, yet it's such that I have real issues playing console games... quite a few console titles give me motion sickness due to lower framerates and other oddities, but the same titles on PC don't bother me at all, because the technology is different.

:toast:
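The frame-pacing point can be sketched with a toy example: two hypothetical frame-time traces that both average ~30 FPS, where only one is evenly paced (the numbers are made up for illustration, not measurements from any card).

```python
# Two hypothetical frame-time traces (milliseconds). Both average
# ~30 FPS, but the second alternates fast and slow frames.
steady = [33.3] * 30
stutter = [16.7, 50.0] * 15

def avg_fps(frame_times_ms):
    # average FPS = frames rendered / total seconds elapsed
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

for name, trace in (("steady", steady), ("stutter", stutter)):
    print(f"{name}: {avg_fps(trace):.1f} avg FPS, "
          f"longest frame {max(trace):.1f} ms")
```

Both traces report ~30 FPS, but the second spends half its time on 50 ms frames (an effective 20 FPS), which is exactly the judder an average hides.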
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Hardocp doesn't use the same settings for each test card, it rates the card on playable frame rates and what their max playable settings are.

http://hardocp.com/images/articles/1292337625zR9jST7GBp_3_2.gif


5870 settings are lower
http://hardocp.com/images/articles/1292337625zR9jST7GBp_5_2.gif

You're cherry-picking. That's one game and one setting, and an extremely high one at that. How many people have a 30" LCD, and of all of them, how many use 8xAA? 0.01%? At those settings, the card with the higher amount of VRAM is simply faster. Back in the day when it was released, the GTS250 creamed the HD4870 512 at 2560x1600, badly, just because it had 1 GB.

HardOCP reviews are full of fail and always have been. 30 fps is not playable, and no one is going to sacrifice fps to play at such high resolutions and AA.
 

CDdude55

Crazy 4 TPU!!!
Joined
Jul 12, 2007
Messages
8,178 (1.33/day)
Location
Virginia
System Name CDdude's Rig!
Processor AMD Athlon II X4 620
Motherboard Gigabyte GA-990FXA-UD3
Cooling Corsair H70
Memory 8GB Corsair Vengence @1600mhz
Video Card(s) XFX HD 6970 2GB
Storage OCZ Agility 3 60GB SSD/WD Velociraptor 300GB
Display(s) ASUS VH232H 23" 1920x1080
Case Cooler Master CM690 (w/ side window)
Audio Device(s) Onboard (It sounds fine)
Power Supply Corsair 850TX
Software Windows 7 Home Premium 64bit SP1
I'll probably downgrade to a 6850 and grab a second one for crossfire in the future.

Edit: or a 6870.
 
Last edited:

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.14/day)
I think they are having a ton of driver issues. The scores are really too erratic to make sense otherwise. I will reserve my judgement until they get at least one more update in. Seems like they are having issues with the compiler. I bet we see them fracture the drivers for the 6800 and 6900 series cards.
 
Last edited:

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
I think they are having a ton of driver issues. the scores are really to erratic to make sense otherwise.

Not really, IMO. Drivers will improve performance, but probably not by much; they won't change the overall picture. VLIW5 to VLIW4 doesn't necessarily mean much better performance. Cayman having 20% more SIMDs than Cypress means 20% is the most improvement it can aspire to, but some games (or certain aspects of an engine) could actually use the VLIW5 architecture to the fullest, and in those cases Cayman == Cypress. Average out the occasions in which VLIW5 was not fully utilized against the ones where it was, and you'll probably end up with the numbers we are seeing**.

Just like Cypress was inefficient with 20 SIMDs, Cayman is just as "inefficient", although it has 24 SIMDs. Hence efficiency is actually better, but not on the level that Barts achieves. That's as much as they can possibly do without a truly new architecture. People confused the higher ALU utilization rate derived from the lower number of SIMDs in Barts with higher efficiency, and that was not the case. Apparently there are only so many SPs that AMD's already aging architecture can handle. This is why the SP number remains the same, although the issue rate has been raised by 20% by going to VLIW4. I've been saying that since Barts released and no one believed me. Cayman could have fixed this by duplicating the only thing that Cypress had not duplicated yet (triangle setup), but maybe some here on TPU remember that I predicted that might not be the case.

My explanation was that even with one triangle setup engine, the peak triangle output was way higher than what we see in games. Even with massive tessellation, the actual triangle rate never exceeds ~25% of the theoretical 850 million triangles per second that Juniper/Cypress can achieve. IMO doubling the setup engine would not increase performance except by doubling availability. Availability can still be important, so you can see tangible improvements, but certainly not 2x (more like +10-20%). That is of course different with heavy tessellation; there, something close to a 2x increase is expected.
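The arithmetic behind that peak figure is simple. A quick sketch, assuming one triangle per clock at Cypress's 850 MHz engine clock (both numbers are the commonly cited specs, not from this thread's benchmarks):

```python
core_clock_hz = 850e6    # Cypress/Juniper engine clock
tris_per_clock = 1       # single setup engine: one triangle per cycle
peak = core_clock_hz * tris_per_clock

print(f"theoretical peak: {peak / 1e6:.0f} M triangles/s")
# the post's claim: games stay under ~25% of that, even with tessellation
print(f"claimed in-game ceiling: ~{0.25 * peak / 1e6:.1f} M triangles/s")
```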

I don't know if I was/am right, only time will tell, but the Cayman benchmarks sure look like they back up my thoughts for the time being.

** Back in the days of the HD3000, several people at Beyond3D and gamedev.net reported, through actual testing, that average ALU usage in games was between 3.5 and 4.5 out of 5, which means that 4 works out as a better number than 5 (it's the average). Unfortunately, in those cases where games used less than 3.5, VLIW4 is not that much better than VLIW5, and on those occasions where it was 4.5, VLIW5 was actually faster. In general VLIW4 > VLIW5, but not by much and not always.
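The footnote's averaging argument can be sketched numerically. This is a deliberately simplified per-unit model (it ignores SIMD counts and clocks); the 3.5-4.5 figures are the ones quoted above:

```python
def ops_issued(avg_ops_ready, vliw_width):
    # a VLIW bundle can't issue more ops per clock than its width
    return min(avg_ops_ready, vliw_width)

for ready in (3.5, 4.0, 4.5):
    v5 = ops_issued(ready, 5)   # Cypress-style VLIW5
    v4 = ops_issued(ready, 4)   # Cayman-style VLIW4
    print(f"{ready} ops ready: VLIW5 issues {v5} ({v5/5:.0%} full), "
          f"VLIW4 issues {v4} ({v4/4:.0%} full)")
```

At 4.5 ops ready, the VLIW5 unit actually issues more work per clock (4.5 vs 4.0), matching the claim that VLIW5 sometimes wins; at or below 4.0, both issue the same ops while VLIW4 wastes fewer slots.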
 
Last edited:

finndrummer

New Member
Joined
Dec 30, 2009
Messages
186 (0.04/day)
System Name i5-750
Processor i5 750
Motherboard ASUS P7P55D Deluxe
Cooling Arctic Silver Pro Freezer 2
Memory 2 x Corsair 2GB XMS3 DDR3-1333Mhz
Video Card(s) EVGA GTX 570
Storage SAMSUNG F3 HD103SJ 1 To
Display(s) Samsung Syncmaster PX2370 23" LED Full HD
Case Antec Nine hundred Modded
Power Supply Corsair HX750
Software Windows 7 Ultimate 64x
So Nvidia won this chapter, and I think they can also release a dual-GPU card to answer the 6990; they came from very far behind and won this battle. I see the GTX 570 as the best performance/watt and performance/dollar card; it's really surprising. But why are you guys disappointed? Isn't the 6900 series a simple refresh of the previous generation?! You can't expect much more.
 
Joined
Jul 19, 2006
Messages
43,587 (6.72/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
So Nvidia won this chapter, and I think they can also release a dual-GPU card to answer the 6990; they came from very far behind and won this battle. I see the GTX 570 as the best performance/watt and performance/dollar card; it's really surprising. But why are you guys disappointed? Isn't the 6900 series a simple refresh of the previous generation?! You can't expect much more.

Won what? 6900 series is not a refresh.
 

finndrummer

New Member
Joined
Dec 30, 2009
Messages
186 (0.04/day)
System Name i5-750
Processor i5 750
Motherboard ASUS P7P55D Deluxe
Cooling Arctic Silver Pro Freezer 2
Memory 2 x Corsair 2GB XMS3 DDR3-1333Mhz
Video Card(s) EVGA GTX 570
Storage SAMSUNG F3 HD103SJ 1 To
Display(s) Samsung Syncmaster PX2370 23" LED Full HD
Case Antec Nine hundred Modded
Power Supply Corsair HX750
Software Windows 7 Ultimate 64x
Joined
Sep 25, 2007
Messages
5,965 (0.98/day)
Location
New York
Processor AMD Ryzen 9 5950x, Ryzen 9 5980HX
Motherboard MSI X570 Tomahawk
Cooling Be Quiet Dark Rock Pro 4(With Noctua Fans)
Memory 32Gb Crucial 3600 Ballistix
Video Card(s) Gigabyte RTX 3080, Asus 6800M
Storage Adata SX8200 1TB NVME/WD Black 1TB NVME
Display(s) Dell 27 Inch 165Hz
Case Phanteks P500A
Audio Device(s) IFI Zen Dac/JDS Labs Atom+/SMSL Amp+Rivers Audio
Power Supply Corsair RM850x
Mouse Logitech G502 SE Hero
Keyboard Corsair K70 RGB Mk.2
VR HMD Samsung Odyssey Plus
Software Windows 10
If the cards were 1920sp, I think all of us would be happy, because from the looks of it they would solidly compete with the 580, but they don't. What bugs me is that it's not in the realm of 5970 performance; AMD had a track record of performing on par with their previous dual GPUs, and this doesn't. Therefore I say it's not a new gen, it's a proof of concept for VLIW4.
 

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.14/day)
They have a much more efficient front end design. Something doesn't add up.


Not really, IMO. Drivers will improve performance, but probably not by much; they won't change the overall picture. VLIW5 to VLIW4 doesn't necessarily mean much better performance. Cayman having 20% more SIMDs than Cypress means 20% is the most improvement it can aspire to, but some games (or certain aspects of an engine) could actually use the VLIW5 architecture to the fullest, and in those cases Cayman == Cypress. Average out the occasions in which VLIW5 was not fully utilized against the ones where it was, and you'll probably end up with the numbers we are seeing**.

Just like Cypress was inefficient with 20 SIMDs, Cayman is just as "inefficient", although it has 24 SIMDs. Hence efficiency is actually better, but not on the level that Barts achieves. That's as much as they can possibly do without a truly new architecture. People confused the higher ALU utilization rate derived from the lower number of SIMDs in Barts with higher efficiency, and that was not the case. Apparently there are only so many SPs that AMD's already aging architecture can handle. This is why the SP number remains the same, although the issue rate has been raised by 20% by going to VLIW4. I've been saying that since Barts released and no one believed me. Cayman could have fixed this by duplicating the only thing that Cypress had not duplicated yet (triangle setup), but maybe some here on TPU remember that I predicted that might not be the case.

My explanation was that even with one triangle setup engine, the peak triangle output was way higher than what we see in games. Even with massive tessellation, the actual triangle rate never exceeds ~25% of the theoretical 850 million triangles per second that Juniper/Cypress can achieve. IMO doubling the setup engine would not increase performance except by doubling availability. Availability can still be important, so you can see tangible improvements, but certainly not 2x (more like +10-20%). That is of course different with heavy tessellation; there, something close to a 2x increase is expected.

I don't know if I was/am right, only time will tell, but the Cayman benchmarks sure look like they back up my thoughts for the time being.

** Back in the days of the HD3000, several people at Beyond3D and gamedev.net reported, through actual testing, that average ALU usage in games was between 3.5 and 4.5 out of 5, which means that 4 works out as a better number than 5 (it's the average). Unfortunately, in those cases where games used less than 3.5, VLIW4 is not that much better than VLIW5, and on those occasions where it was 4.5, VLIW5 was actually faster. In general VLIW4 > VLIW5, but not by much and not always.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
They have a much more efficient front end design. Something doesn't add up.

How so? Some parts have been doubled (the few that were not already doubled in Cypress); that's it, that's all the improvement that Cayman brought to the table.

There could be some internal improvements, and yes, they said it would be much better, among many other claims, for example that the HD6970 is 20% faster than the GTX480 in the slides, which is obviously false. They said Cayman would have better perf/watt and perf/mm^2 than Evergreen, which is obviously false. What makes you think any of the remaining claims are true?
 

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.14/day)
How so? Some parts have been doubled (the few that were not already doubled in Cypress); that's it, that's all the improvement that Cayman brought to the table.

There could be some internal improvements, and yes, they said it would be much better, among many other claims, for example that the HD6970 is 20% faster than the GTX480 in the slides, which is obviously false. They said Cayman would have better perf/watt and perf/mm^2 than Evergreen, which is obviously false. What makes you think any of the remaining claims are true?

Doesn't it seem odd that they missed the target by so much? I will await new drivers and see what that brings. I think they are having compiler issues.
 

AltecV1

New Member
Joined
Jan 1, 2009
Messages
1,286 (0.23/day)
Location
Republic of Estonia
Processor C2D E8400@3.6 ghz
Motherboard ASUS P5KPL-AM
Cooling Freezer 7 pro
Memory Kingston 4GB 800Mhz cl6
Video Card(s) Sapphire HD4850 700/1000 + Accelero S1 Rev. 2
Storage WD 250 GB AAKS
Display(s) 22" Samsung SyncMaster 226BW
Audio Device(s) int.
Power Supply Forton Blue Storm II 500W
Software Windows 7 64bit Ultimate
The GTX 580 and 570 are better performers. They won the high-end single-GPU segment battle.



We can't call it a new generation.

Is the GTX 500 series new?
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Doesn't it seem odd that they missed the target by so much? I will await new drivers and see what that brings. I think they are having compiler issues.

I don't believe in massive improvements from drivers; I never expect them, and history has demonstrated I'm right over and over again.

They will improve, but not by that much. 5%? Sure. A total of 20% by the time it's EOL, 12 months from now? Sure.

10-20% in the next 1-2 months? I don't think so. No way. They've had months to improve them, 2 more months than planned after the delay. What are they going to improve in 1 month that they couldn't improve in the 6 months prior to release? We knew nothing about the cards, but AMD had working samples for months and simulations for over a year. Heck, if we're to believe what Anandtech says in their review about the development story of Cayman (and we should, since they're friendly with high-ranking AMD people), AMD knew they were going VLIW4 since 2007. Pair that with the time they had with prototypes, plus the time they had with first silicon back from TSMC months ago, plus the time they had planned to spend with actual production chips, plus the 2-month delay that meant 2 more months of unexpected driver development time... they should have done a decent job, and if they didn't: FAIL. And they will not fix it in 1-2 months.

ALSO:

HIS Radeon HD 6970 2048 MB
HIS Radeon HD 6970 2048 MB

10.12 gives a nice performance boost, as I heard

You heard wrong. Check the AMD release notes. Also, there's no way to retest all the cards in one day; 10.12 was released 30 hours ago.
 
Last edited:
Joined
Nov 13, 2007
Messages
10,233 (1.70/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.

I noticed that you mentioned the efficiency of Barts being higher. I think a lot of the speculation about the 6900 stemmed from the incredible efficiency that the 6800 parts showed...

Hell, it seems as if they hadn't gone for all the extra "features" that the 6900 has over Barts, and had just scaled up the Barts die size and design quantitatively, we would have a much faster card.
 

Thatguy

New Member
Joined
Nov 24, 2010
Messages
666 (0.14/day)
I don't believe in massive improvements from drivers; I never expect them, and history has demonstrated I'm right over and over again.

They will improve, but not by that much. 5%? Sure. A total of 20% by the time it's EOL, 12 months from now? Sure.

10-20% in the next 1-2 months? I don't think so. No way. They've had months to improve them, 2 more months than planned after the delay. What are they going to improve in 1 month that they couldn't improve in the 6 months prior to release? We knew nothing about the cards, but AMD had working samples for months and simulations for over a year. Heck, if we're to believe what Anandtech says in their review about the development story of Cayman (and we should, since they're friendly with high-ranking AMD people), AMD knew they were going VLIW4 since 2007. Pair that with the time they had with prototypes, plus the time they had with first silicon back from TSMC months ago, plus the time they had planned to spend with actual production chips, plus the 2-month delay that meant 2 more months of unexpected driver development time... they should have done a decent job, and if they didn't: FAIL. And they will not fix it in 1-2 months.

ALSO:

HIS Radeon HD 6970 2048 MB
HIS Radeon HD 6970 2048 MB

Actually you're wrong; TSMC really screwed them up with the cancellation of the 32nm node. That likely caused some big issues in the design phase; it's not the chip they wanted to build. So likely the driver development they had done on prototype silicon was trashed and they had to start all over again. Not all designs scale up well.

So I'd wait till January before condemning these cards. But if they maintain their price points, or even drop a bit, we get some good price/perf, and it's still a win for AMD, as they get to find out the various weaknesses in the new architecture while they finalize the new design for 28nm.

That said, I am not buying anything, be it Intel, AMD, Nvidia, etc., for at least 2-3 months, so I have time to wait.
 
Joined
May 12, 2006
Messages
11,119 (1.70/day)
System Name Apple Bite
Processor Intel I5
Motherboard Apple
Memory 40gb of DDR 4 2700
Video Card(s) ATI Radeon 500
Storage Fusion Drive 1 TB
Display(s) 27 Inch IMac late 2017
Ok, I'll just answer with this...are you playing PC games on your TV? If so, then I can agree with you.




However... can you guarantee that the ~30 FPS is a smoothly rendered framerate, or does the latency between rendered frames vary?


I can tell you that the display output technology of television sets and monitors differs enough that a PC screen needs frame rates at its 60 Hz refresh rate, whereas HDTVs support a 23.976 Hz refresh that smooths out those frames that monitors lack.


It's not always just about the numbers. You need to examine both how those numbers are generated and what those numbers translate to further down the line. There is a distinct difference in display technology between the PC platform and the console platforms, and it's not talked about very often, yet it's such that I have real issues playing console games... quite a few console titles give me motion sickness due to lower framerates and other oddities, but the same titles on PC don't bother me at all, because the technology is different.

:toast:


Um, no, an HDTV and an HD monitor aren't different, and the consoles mainly use similar tech to PCs, but whatever. Maybe an analog TV, but certainly not the TVs most people are playing consoles on. 30 frames, sustained, is more than enough for smooth gameplay.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.61/day)
Um, no, an HDTV and an HD monitor aren't different, and the consoles mainly use similar tech to PCs, but whatever. Maybe an analog TV, but certainly not the TVs most people are playing consoles on. 30 frames, sustained, is more than enough for smooth gameplay.

You are missing that TVs support a 30 Hz or 24 Hz mode, but most monitors only support 60 Hz at their native resolution. Some monitors, like my Dell 3008WFP, support the 24 Hz mode for Blu-ray playback; however, my Dell P2310H and the Dell 3007WFP do not, making anything less than their native refresh quite disturbing.


NOT ALL MONITORS SUPPORT THE SAME REFRESH RATES AS TVs... heck, the AMD driver only recently got 120 Hz support for 3D applications, so drivers must be considered as well.


For example, while my Dell 3008WFP supports HDTV modes with the odd refresh rates like Blu-ray's 24 FPS, I must enable those modes within the driver and set my desktop to that mode as well; the monitor does not automatically switch to the lower refresh mode. Likewise, any app must support v-sync at the lowered refresh rate before you can even think about getting a sustained 30 FPS.


I will say though... you said sustained, and that's a big part of it. On a console, the refresh rate of the controller and the display are synced; on PC, because of things like driver issues, actually getting a sustained framerate in any app is nearly impossible... and you can bet those tests were NOT done with v-sync enabled.

On a TV...v-sync is automatic.

And that's what I'm talking about. Sure, 30+ FPS can be fine, but only with v-sync enabled on the PC platform. And even on consoles we still get tearing, because many titles "cheat".
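The refresh mismatch here comes down to arithmetic: 24 fps content doesn't divide evenly into a 60 Hz panel, so frames get held for alternating 3 and 2 refreshes (3:2 pulldown), while a native 24 Hz mode holds every frame the same length. A quick sketch:

```python
refresh_hz, content_fps = 60, 24
print(refresh_hz / content_fps)        # 2.5 refreshes per frame: not an integer

# 3:2 pulldown: frames alternate between 3 and 2 refresh periods on screen
hold_ms = [3 * 1000 / refresh_hz, 2 * 1000 / refresh_hz]
print([round(t, 1) for t in hold_ms])  # uneven hold times -> judder

# native 24 Hz mode: every frame held for the same duration (ms)
print(round(1000 / content_fps, 1))
```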
 
Last edited:
Joined
Nov 13, 2007
Messages
10,233 (1.70/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
Um, no, an HDTV and an HD monitor aren't different, and the consoles mainly use similar tech to PCs, but whatever. Maybe an analog TV, but certainly not the TVs most people are playing consoles on. 30 frames, sustained, is more than enough for smooth gameplay.

Maybe if you're playing an RPG or RTS, or you like barely playable framerates. There is a noticeable responsiveness boost at higher FPS. 45 FPS is usually money, but 30 is struggling and is only OK as a minimum. A constant 30 fps is a great way to get your ass handed to you in any FPS game.
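One reason responsiveness scales with frame rate: every frame of delay costs a full frame time, and the frame time shrinks as the rate climbs. A quick illustration:

```python
# time each rendered frame stays current, i.e. the minimum granularity
# at which your input can be reflected on screen
for fps in (24, 30, 45, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
```

Going from 30 to 60 fps halves the per-frame delay from ~33 ms to ~17 ms, which is the responsiveness difference being described.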
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.61/day)
For me, F1 2010 is the perfect example... I literally get better lap times @ 60 FPS than @ 45 (1x 5870 vs 2x 5870).

It's so bad that I really like the game but won't play it, because I cannot get 60 FPS with my current config.
 

EastCoasthandle

New Member
Joined
Apr 21, 2005
Messages
6,885 (0.99/day)
System Name MY PC
Processor E8400 @ 3.80Ghz > Q9650 3.60Ghz
Motherboard Maximus Formula
Cooling D5, 7/16" ID Tubing, Maze4 with Fuzion CPU WB
Memory XMS 8500C5D @ 1066MHz
Video Card(s) HD 2900 XT 858/900 to 4870 to 5870 (Keep Vreg area clean)
Storage 2
Display(s) 24"
Case P180
Audio Device(s) X-fi Plantinum
Power Supply Silencer 750
Software XP Pro SP3 to Windows 7
Benchmark Scores This varies from one driver to another.
Actually you're wrong; TSMC really screwed them up with the cancellation of the 32nm node. That likely caused some big issues in the design phase; it's not the chip they wanted to build. So likely the driver development they had done on prototype silicon was trashed and they had to start all over again. Not all designs scale up well.

So I'd wait till January before condemning these cards. But if they maintain their price points, or even drop a bit, we get some good price/perf, and it's still a win for AMD, as they get to find out the various weaknesses in the new architecture while they finalize the new design for 28nm.

That said, I am not buying anything, be it Intel, AMD, Nvidia, etc., for at least 2-3 months, so I have time to wait.

Here is something I found straight from the horse's mouth:
There really aren't that many apps that are setup bound. You can see from any straight geometry test that the dual geometry engines, in terms of setup, are working fine. There are more improvements to come from the drivers with regard to the tessellation changes, though.
source




Not quite. R520 had about a year of software bring-up; Cayman, on the other hand, only taped out in May. The amount of exposure to this new architecture is a lot less due to these schedules.
source

There is something wrong with the drivers. There is no conspiracy about it. However, that's not an excuse for not being ready on their own release day!
 
Joined
Feb 24, 2009
Messages
3,516 (0.63/day)
System Name Money Hole
Processor Core i7 970
Motherboard Asus P6T6 WS Revolution
Cooling Noctua UH-D14
Memory 2133Mhz 12GB (3x4GB) Mushkin 998991
Video Card(s) Sapphire Tri-X OC R9 290X
Storage Samsung 1TB 850 Evo
Display(s) 3x Acer KG240A 144hz
Case CM HAF 932
Audio Device(s) ADI (onboard)
Power Supply Enermax Revolution 85+ 1050w
Mouse Logitech G602
Keyboard Logitech G710+
Software Windows 10 Professional x64
Not really, IMO. Drivers will improve performance, but probably not by much.................

While I never read that post about the front end needing improvement, I would completely agree with you. As I think Charlie first suggested, it appears that Cayman is using the new shaders from the architecture overhaul while just fixing some front-end stuff to help DX11 performance. It should be interesting to see how the front end is updated when Southern Islands comes out.

I think that's easy to see when you consider that nVidia fit GF104/114 with 4 schedulers capable of thread-level parallelism, enough to feed 48 shaders, 16 load/store, 16 interpolation, 8 SFU, and 8 texture units (or, put another way, 7 groups of blocks). I'm going to guess that an updated dispatch engine is needed for AMD to fully utilize the number of shaders and texture units they're grouping under a single engine.

Also, the 3.5 that Beyond3D came up with seems very accurate even that long ago, as in the PCPer article they state that AMD put the average a little lower (3.2 on average, I think).

edit: EastCoasthandle, I don't know why in the world he is addressing the geometry setup, as that's not the problem I see. The problem lies in being able to set up and send enough work to the shaders to take advantage of them.
 