
AMD "Barts" GPU Detailed Specifications Surface

what the hell are you saying, do you know why AMD chose 5D shaders?

It's because AMD can pack in more shader processors, more efficiently, than NVIDIA's big shaders. And IF native PC games are crap on ATI, then why oh why does Crysis run superbly on ATI compared to its NVIDIA counterpart?

And btw, I don't want to go back to when everything was EXPENSIVE. Heck, I even remember seeing a P3 800 MHz cost a whopping $1000. But I do want devs to push the hardware more; we want another Crysis.

Same on the 5D shaders: why didn't AMD go with 2 complex + 3 simple rather than 4 simple + only one complex? Because AMD already optimizes for console ports, which tend to use simple shader instructions in their game engines. They're 5D all right, but most of the units (the 4 simple ones) become useless when handling complex instructions and more flexible code (such as PhysX or OpenCL), which leaves only 1 complex port functional during gameplay. Most native PC games use far more complex code than console games, while NVIDIA's BIG shaders are more adaptable than AMD's 5D in every way.

And about Crysis: under most settings even a 4890 has problems outpacing a 9800 GTX+ in every bench (don't bring up the Vapor-X version, those so-called 1.2 GHz super-overclocked editions that beat a stock-clocked GTX 260... don't give me that). AMD has lost every native PC title; it only wins on console ports!! AMD's market share is nearly equal to console-port sales every year, which is why AMD was planning to stay with mid-range cards and wait for consoles to move to the next step. That's the "most profitable spot" most people prefer.
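(Tangent: the utilization argument above can be sketched numerically. This is a toy model, not real hardware data; the counts are illustrative, loosely based on an HD 5870-class VLIW5 part, 320 clusters of 5 ALUs, versus a GTX 480-class part with 480 scalar shaders, and it ignores clock-speed differences entirely.)

```python
# Toy model: effective ops/clock of VLIW5 clusters vs. scalar shaders,
# as a function of how many independent operations per clock (ILP) the
# compiler can pack into each 5-wide bundle. Illustrative numbers only.

def vliw5_throughput(ilp, clusters):
    """Ops/clock for VLIW5: each cluster has 5 slots, but on average
    only min(ilp, 5) of them can be filled with independent ops."""
    return clusters * min(ilp, 5)

def scalar_throughput(shaders):
    """Ops/clock for scalar shaders: one op per shader per clock."""
    return shaders

# 320 VLIW5 clusters (= 1600 "shaders") vs. 480 scalar shaders.
for ilp in (1, 2, 3, 4, 5):
    v = vliw5_throughput(ilp, clusters=320)
    s = scalar_throughput(480)
    print(f"ILP={ilp}: VLIW5={v} ops/clk, scalar={s} ops/clk, ratio={v/s:.2f}")
```

The point of the sketch: with simple, parallel-friendly shader code (high ILP) the VLIW5 design's raw width wins, but when the compiler can only fill one or two slots per bundle, most of the 5-wide cluster sits idle.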

And yeah, without a $1000 P3 800 MHz ten years ago, you wouldn't even have had a P3 400 MHz at a cheaper price, and therefore you wouldn't have any powerful processor like the Core 2, or a powerful GPU that can play decent graphics like Crysis. Maybe your PC would be a 486 playing fubby island every day, I presume?
 
Do you seriously want those times back? :wtf:

Because I just read your post and you're almost delusional.
 
OK, first of all, it doesn't matter if the game is a console port or not; the real deal is how the devs code their games. So even if the game was a console port from the Xbox, that doesn't guarantee the game runs better on ATI, same as native PC games.

Just look at Crysis:

http://tpucdn.com/reviews/ASUS/GeForce_GTS_450_TOP_DirectCU/images/crysis_1920_1200.gif


So even a stock HD 4870 beats the GTX 260.

Or look at HAWX:

http://tpucdn.com/reviews/ASUS/GeForce_GTS_450_TOP_DirectCU/images/hawx_1920_1200.gif


Even though it's a console port, it's still faster on the NVIDIA card.
 
Swag

Umm, it could be that I have no idea what's going on with the naming, but I thought people had been throwing around the idea that Barts would get the x8xx names, as in 6870 and 6850, and the next level up (Cayman?) would be the 6970 and 6950, with the top dual-chip card being a 6990. That's why it made no sense to me why they would change the naming to something like that... I think I'm just confused by all the rumors and false information floating around, as usual before hardware launches.

*edit* I think posting first thing in the morning is not a great idea for me :p The problem is that not knowing how the Cayman chips will be spec'd is what's really confusing me, since normally they have been double the mid-range cards in recent years. If they're not this time around, I guess I can accept that the new naming makes some sense, but if a 5870 beats a 6870 then I'd be back to not understanding the change. I don't even know where everyone is getting these names from; is there a source?

I do not believe these specs are real. My best guess is that, with 32nm enabling 60% more transistors in the same die area as 40nm, Barts started out as a GPU with 60% more shader clusters than Juniper, and Cayman with 60% more shader clusters than Cypress. With the shift from 4 simple + 1 complex shaders per cluster in Evergreen to 4 moderate-complexity shaders per cluster in Northern Islands, Cayman had 2048 shaders versus Barts' 1024 shaders. This change, together with the tessellation improvements, would have grown the Cayman die to about 400 mm² at 32nm. With the cancellation of 32nm, NI had to be implemented at 40nm, which would have resulted in NI being over 600 mm². This wouldn't be a problem for a single GPU, but it would have made a dual-GPU variant of the high-end Cayman too hot. Thus Cayman was reduced from 2048 shaders to 1280 shaders, but Barts remained at 1024 shaders. After all, Barts was 80% of Cayman, the same ratio as the 40nm 4770 versus the 55nm 48xx (RV770), and the 32nm 5790 versus the 40nm 58xx (Cypress).
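(The die-size arithmetic in that paragraph is at least self-consistent. A quick check using the post's own figures, which are speculation, not confirmed specs: ~60% more transistors per area at 32nm, a ~400 mm² 32nm Cayman, and the 1024/1280 shader split.)

```python
# Check the scaling claims using the post's own (speculative) figures.
# 32nm is claimed to fit ~60% more transistors in the same area as 40nm,
# so a 32nm design back-ported to 40nm grows by about 1.6x in area.
cayman_32nm_mm2 = 400            # claimed 32nm Cayman die size
density_gain = 1.6               # claimed 32nm-vs-40nm density advantage
cayman_40nm_mm2 = cayman_32nm_mm2 * density_gain
print(f"Cayman back-ported to 40nm: ~{cayman_40nm_mm2:.0f} mm^2")  # over 600

# Barts is claimed to be 80% of the reduced Cayman (1024 vs 1280 shaders),
# matching the 4770/RV770 ratio the post cites.
barts_ratio = 1024 / 1280
print(f"Barts/Cayman shader ratio: {barts_ratio:.0%}")
```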

With Barts being 80% of Cayman, which is just south of 400 mm², Barts is nearly the same die size as Cypress; my best guess is about 95% of the Cypress die. Originally, Cayman LE was to replace the low-yield Radeon HD 5870, with the highest-binning Barts having 14 of 16 execution blocks active and clocked at 900 MHz, making it tolerant of up to two defects and sufficiently high-yielding despite its high clock rate. The performance per shader of Barts was 1.5x to 1.8x that of Cypress depending on the application, with Barts showing the smallest improvement where Cypress is strongest relative to GF100, and the largest improvement where Cypress is weakest relative to GF100. Together with the bump to 900 MHz, the original Barts XT (Radeon HD 6770 with 896 shaders @ 900 MHz) would have delivered 1.16 to 1.39 times the performance of the Radeon HD 5850 it would have been replacing.
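(The 1.16x-1.39x range does follow from those numbers. A quick sanity check; the 1.5x-1.8x per-shader factor is the poster's own assumption, while the HD 5850's 1440 shaders at 725 MHz are its public specs.)

```python
# Reproduce the claimed 1.16x-1.39x estimate for the original Barts XT
# (896 shaders @ 900 MHz) vs. the Radeon HD 5850 (1440 shaders @ 725 MHz).
# The 1.5x-1.8x per-shader improvement is the post's own assumption.
barts_xt = 896 * 900        # shaders * MHz
hd5850 = 1440 * 725
raw = barts_xt / hd5850
low, high = raw * 1.5, raw * 1.8
print(f"raw shader-clock ratio: {raw:.3f}")
print(f"estimated performance: {low:.2f}x to {high:.2f}x the HD 5850")
```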

Turks would have 512 shaders versus Barts XT's 896, providing the same shader ratio as GF106 versus the GeForce GTX 460. Then NVIDIA threw a curveball: the GTS 450 would ship at 783 MHz versus just 675 MHz for the GTX 460. Since Turks can't ship at more than 900 MHz, the clock of the Radeon HD 6770 had to be adjusted accordingly to maintain the same performance ratio between the Radeon HD 6670 and the Radeon HD 6770 as between the GTS 450 and GTX 460. Thus the clock of the Radeon HD 6770 was dropped to 775 MHz, reducing the Radeon HD 6770 to 0.997-1.197 times the performance of the Radeon HD 5850, but making room for a Barts core with 960 active shaders at 900 MHz to replace the Radeon HD 5870. This new Barts XT would have 0.953-1.143 times the performance of the Radeon HD 5870, making Cayman LE redundant. Thus Barts XT became the Radeon HD 6830, or at least this potential name change was discussed and leaked.
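(The rebalanced figures are internally consistent too. Same caveat: the 1.5x-1.8x per-shader factor is the poster's assumption; the HD 5850's 1440 shaders @ 725 MHz and the HD 5870's 1600 shaders @ 850 MHz are public specs.)

```python
# Check the rebalanced estimates: a 775 MHz Radeon HD 6770 (896 shaders)
# vs. the HD 5850 (1440 sp @ 725 MHz), and a 960-shader Barts @ 900 MHz
# vs. the HD 5870 (1600 sp @ 850 MHz). The 1.5x-1.8x per-shader factor
# is the post's own assumption, not a confirmed figure.
def perf_range(shaders, mhz, ref_shaders, ref_mhz):
    raw = (shaders * mhz) / (ref_shaders * ref_mhz)
    return raw * 1.5, raw * 1.8

lo_6770, hi_6770 = perf_range(896, 775, 1440, 725)   # vs. HD 5850
lo_xt, hi_xt = perf_range(960, 900, 1600, 850)       # vs. HD 5870
print(f"HD 6770 @ 775 MHz: {lo_6770:.3f}x to {hi_6770:.3f}x the HD 5850")
print(f"new Barts XT:      {lo_xt:.3f}x to {hi_xt:.3f}x the HD 5870")
```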

At the same time, it was realized that while the yield of a fully functional Cayman would be too low to justify launching a fully functional single-GPU card, dual-GPU cards sell in low enough volume that fully functional GPUs can be reserved for them. Thus the change from Radeon HD 6970 to Radeon HD 6990: since the high-end dual-GPU card would have two fully functional GPUs, while the Radeon HD 6870 would have two execution blocks disabled, it didn't make sense to call the dual-GPU card a Radeon HD 6970.

Additionally, realizing that the 40nm Barts die would not be significantly smaller than the Cypress die, ATI continued development of the Cypress-derived, 1280-shader Radeon HD 5790 on 40nm to meet excess demand for Radeon HD 5830-class GPUs. However, its performance would have to be bumped up if it were to supplement the Radeon HD 6750 supply instead of the Radeon HD 5830 supply, reducing or even negating the need to run excess Barts wafers to meet Radeon HD 6750 demand. So ATI decided to rename the Radeon HD 5790 to the Radeon HD 5840, and to introduce a lower-binning Radeon HD 5820 to improve yields.

Thus the rumors for the name changes. The Barts XT will be a 15 execution block part instead of a 14 execution block part, and possibly assume the name of the Radeon HD 6830.
Antilles will have all 20 execution blocks active instead of just 18, and be called the Radeon HD 6990. And what was formerly going to be called the Radeon HD 5790 is going to ship as the Radeon HD 5820 and Radeon HD 5840, even as the original 58xx series is replaced by Barts.

Introductory pricing should be:
Barts XT @ $299
Barts Pro @ $219
Barts LE @ $179
Radeon HD 5840 @ $179
Radeon HD 5820 @ $149
 
...it doesn't matter if the game is a console port or not... even though it's a console port, it's still faster on the NVIDIA card...

I call BS! In a previous review, the NVIDIA GT200 gained a massive 40% lead over the HD 4890 in Crysis, but that was on an NVIDIA 780 chipset. Under an Intel chipset they slow down significantly, which makes it look like the HD 4890 has the advantage over NVIDIA (Intel was f***ing NVIDIA over for a while, so no surprise at such low performance on the i5/i7 platform). HAWX was an AMD-branded title and a console port, so I'm not surprised AMD would take such a lead...
 
(Referring to CDdude55's post, which for some reason, got deleted.)
I'd say he (cheezburger) is in the right place, kinda, but he has a lot of reading to do before his posts are worth reading; at the moment most of what he writes is just horribly inaccurate or blatantly wrong.
 
Alright people, stay closer to the topic. I allow a broad scope for discussion because often interesting things come out of it. Bickering is not one of them.
 
...in a previous review the NVIDIA GT200 gained a massive 40% lead over the HD 4890 in Crysis... but that was on an NVIDIA 780 chipset...

Are you being sarcastic?




BTW, I hope Cayman doesn't turn out to be a power-hungry monster like Fermi. And when exactly does Cayman get released? Is it around October too?
 
Cheezburger, I imagine if it had a 40% lead then there could have been some driver fiddling to make the game run faster, rather than a fair test.

Because I had a 9800 GT (yes, I know it's not a GTX) Asus Matrix edition, and well, it just didn't get close to my 4890 at all. lol

Especially when I ran my 4890 at 1 GHz on stock volts :D
 
...in a previous review the NVIDIA GT200 gained a massive 40% lead over the HD 4890 in Crysis...

GT200 is a bit vague... this could mean anything from a GTX 260 192sp model right the way up to a GTX 285.

...I had a 9800 GT (yes, I know it's not a GTX) Asus Matrix edition, and well, it just didn't get close to my 4890 at all...

A 9800 GT in reality is a fair bit behind a 4890; even the 9800 GTX+/GTS 250 are well behind.

-------------------

If Barts XT is as fast as or faster than a 5850, they have a winner on their hands IMO.
 
Sounds nice, and it looks to be a refresh of Juniper.
 