Thursday, September 16th 2010

AMD ''Barts'' GPU Detailed Specifications Surface

Barely a week after pictures of AMD's "Barts" prototype surfaced, a specifications sheet has followed. The all-important slide from AMD's presentation to its add-in board partners made it to sections of the Chinese media. "Barts" is the successor to "Juniper", on which the Radeon HD 5750 and HD 5770 are based. The specs sheet reveals that the GPU is indeed physically larger, and explains the factors behind its size:

Memory Controller
Barts has a 256-bit wide memory interface, which significantly increases its pin count and package size. The "Pro" and "XT" variants (which will go on to become the HD 6x50 and HD 6x70, respectively) have memory clocked at 1000 MHz and 1200 MHz, respectively, for a nearly 100% increase in memory bandwidth.
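The claimed bandwidth jump is easy to sanity-check. A rough sketch, assuming HD 5770 figures (128-bit bus, 1200 MHz memory) for Juniper and GDDR5's quad-pumped signaling:

```python
# Peak memory bandwidth for a GDDR5 interface. GDDR5 transfers 4 bits per
# pin per memory clock (quad data rate), so effective rate = clock * 4.
def gddr5_bandwidth(bus_width_bits: int, mem_clock_mhz: int) -> float:
    """Peak bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8
    mega_transfers_per_sec = mem_clock_mhz * 4
    return bytes_per_transfer * mega_transfers_per_sec / 1000

juniper_xt = gddr5_bandwidth(128, 1200)   # HD 5770: 76.8 GB/s
barts_pro  = gddr5_bandwidth(256, 1000)   # 128.0 GB/s
barts_xt   = gddr5_bandwidth(256, 1200)   # 153.6 GB/s

print(f"XT gain over Juniper: {barts_xt / juniper_xt - 1:.0%}")
```

Against the HD 5770 the XT works out to exactly double, while the Pro, at its lower 1000 MHz clock, lands at roughly +67%, which is presumably why "nearly 100%" is the safe way to put it.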

Tiny increase in SIMD count, but major restructuring
Compared to Juniper, the physical stream processor count rises by only 20%. The XT variant has 960 stream processors, while the Pro variant has 800. AMD specifically mentioned SIMD block counts (10 enabled for the Pro, 12 for the XT). As the slide notes, the GPU is based on the "Cypress Dual Engine architecture", meaning those 10 and 12 SIMDs are spread across two blocks of 5 (Pro) or 6 (XT) SIMDs each, just as Cypress has two blocks of 10 SIMDs each.
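Those stream processor figures follow directly from the SIMD counts, assuming Barts keeps the Cypress/Juniper layout of 16 VLIW5 units (80 ALUs) per SIMD; a quick check:

```python
# Cypress/Juniper layout assumed for Barts: each SIMD holds 16 VLIW5 units,
# and each VLIW5 unit contains 5 ALUs ("stream processors").
ALUS_PER_SIMD = 16 * 5

def stream_processors(simd_count: int) -> int:
    return simd_count * ALUS_PER_SIMD

juniper = stream_processors(10)               # HD 5770: 800 SPs
pro, xt = stream_processors(10), stream_processors(12)

print(pro, xt)                   # 800 960, matching the leaked spec sheet
print((xt - juniper) / juniper)  # 0.2, the 20% bump over Juniper
```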

Other components
The raster operations (ROP) count has been doubled to 32; TMUs stand at 40 for the Pro and 48 for the XT.

The design methodology is extremely simple. Juniper-based graphics cards already carry eight memory chips to reach 1 GB using the market-popular 1 Gbit GDDR5 parts, so why not place those eight chips across a 256-bit wide memory interface and double the memory bandwidth? The increased ROP count, coupled with the up to 20% increase in shader compute power, gives Barts the competitive edge it needs to face NVIDIA's reinvigorated GeForce 400 series after the introduction of the GeForce GTX 460. As for power draw, AMD projects the Pro variant to draw less than 150 W, with the XT drawing "over" 150 W.
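The chip math behind that reasoning, under the article's assumptions (eight 1 Gbit GDDR5 devices, each able to drive a full 32-bit interface):

```python
# Board-level arithmetic for the rumored 256-bit Barts layout. On a 128-bit
# Juniper card the same eight chips presumably run in a narrower device mode,
# so capacity stays identical; only the aggregate bus width changes.
CHIPS = 8
GBIT_PER_CHIP = 1       # 1 Gbit density, the market-popular part
IO_BITS_PER_CHIP = 32   # full-width GDDR5 device interface

capacity_gb = CHIPS * GBIT_PER_CHIP / 8   # 8 Gbit total = 1.0 GB
bus_width = CHIPS * IO_BITS_PER_CHIP      # 256-bit at full device width

print(capacity_gb, bus_width)  # 1.0 256
```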

Market Positioning
AMD doesn't have huge expectations of this GPU. Its task is cut out: compete with the GeForce GTX 460 768 MB and 1 GB models. Where NVIDIA differentiates its variants by memory amount and ROP count, AMD's differ in clock speeds and SIMD counts. It should then be obvious what these GPUs' pricing will look like.

When?
Usually, when AMD gives such a presentation to its AIB partners, a market release is about three months away.

Source: ChipHell

110 Comments on AMD ''Barts'' GPU Detailed Specifications Surface

#1
cheezburger
by: yogurt_21
A 20% bump in shaders, ok, but doubling the ROPs? That seems unlikely; it would be awesome, but unlikely. It's far easier to add shaders than it is to add ROPs *unless* we're looking at a crippled Cypress core here with a new name.
More shaders don't always provide more performance, unless you're a big fan of the Unreal 3 engine... Adding shaders is easier for the hardwiring and die design, but it also eats die area and generates more heat in the same space. Ever wonder why the 256 mm² RV770 runs so ridiculously hot compared to the 240 mm² G94 on an older process? So don't expect AMD to just pile on shaders like it did from RV670 to RV770. A ridiculous shader count doesn't help performance; it's ROPs, Z-buffer, bus width and shader architecture that matter. In the extreme case, even giving Cypress 3200 shaders still wouldn't keep pace with GF100; it would just generate more heat and end up as another 2900 XT...
#2
cadaveca
My name is Dave
by: largon
I smell a burger full o' crap here.
:laugh:

That's what happens when people speculate... nobody should be taking any of this seriously.

Wait a minute, I already said that. Funny...

:D
#3
cheezburger
by: largon
I smell a burger full o' crap here.
I smell an NVIDIA fan here too :D
#4
cadaveca
My name is Dave
I actually smell someone who usually knows what's up. Smells kinda like success...:laugh:
#5
NeSeNVi
by: Semi-Lobster
To be honest, I'm pretty disappointed by the high power consumption. For their so-far short existence (starting with the revolutionary 4770), the x700 series have been excellent thanks to low power consumption, which was great for entry-level gamers. But at 150 W these cards are on par with the 5850 while not being as good as the 5850; at that rate you might as well get a 5850, since prices will drop once the 6000 series hits store shelves.
This is what I wanted to say after reading this news too. Totally agree.
#6
Imsochobo
by: CDdude55
My GTX 470 says something completely different. :p :)
NVIDIA thinks differently; they aren't making money on a big, expensive card that has to compete with cards half as complex...
Sorry mate, it's not a great video card.

It may serve you well on performance, though; if that's what matters, you got what you want.

The Fermi-based 470 and 480 are rubbish in their current state, but it's a generational change, much like the 2900.

AMD has done well with efficient designs!

by: cheezburger
More shaders don't always provide more performance, unless you're a big fan of the Unreal 3 engine... Adding shaders is easier for the hardwiring and die design, but it also eats die area and generates more heat in the same space. Ever wonder why the 256 mm² RV770 runs so ridiculously hot compared to the 240 mm² G94 on an older process? So don't expect AMD to just pile on shaders like it did from RV670 to RV770. A ridiculous shader count doesn't help performance; it's ROPs, Z-buffer, bus width and shader architecture that matter. In the extreme case, even giving Cypress 3200 shaders still wouldn't keep pace with GF100; it would just generate more heat and end up as another 2900 XT...
Ehm, shaders do a lot.
ATI has found a very good ratio; it has proven to be well balanced.
The only thing they actually needed versus NVIDIA was shader/tessellation power, where Fermi was superior; ATI has more ROPs, if I remember right.

256-bit is enough for the 6870.
192-bit would be enough for the 6770, I guess, but yeah, odd memory amounts...

ATI just needs to improve its tessellation; the new architecture may have this, and we'll find out with the 6xxx and for real with the 7xxx.

Excited to see what the future holds!
#7
CDdude55
Crazy 4 TPU!!!
by: Imsochobo
NVIDIA thinks differently; they aren't making money on a big, expensive card that has to compete with cards half as complex...
Sorry mate, it's not a great video card.

It may serve you well on performance, though; if that's what matters, you got what you want.

The Fermi-based 470 and 480 are rubbish in their current state, but it's a generational change, much like the 2900.
I agree in the sense that they are not very efficient cards compared to what AMD is currently running with.

But yes, on the performance side of things you are really getting a good treat at a nice price. :)
#8
wolf
Performance Enthusiast
by: Imsochobo
It may serve you well on performance, though; if that's what matters, you got what you want.
This statement is pure lol to me. Performance is always what I look at first; all other considerations are secondary, and in that respect GF100 rocks my socks.

I'd rather consider performance first than start with pain-in-the-ass things like power consumption and heat, assuming you've built a good enough rig to handle throwing in high-end cards.
#9
CDdude55
Crazy 4 TPU!!!
by: wolf
This statement is pure lol to me. Performance is always what I look at first; all other considerations are secondary, and in that respect GF100 rocks my socks.

I'd rather consider performance first than start with pain-in-the-ass things like power consumption and heat, assuming you've built a good enough rig to handle throwing in high-end cards.
Exactly.:toast:
#10
Imsochobo
by: CDdude55
Exactly.:toast:
Hehe, performance was my previous goal too.
Now I run a 5850, and am borrowing a second one.
I tried the 470; it overheated in my micro-ATX case, and the HDMI sound was horrible...

The second card is a must for me, so ATI is onto something. The only way I see it, NVIDIA will be swallowed by someone, much like ATI was by, erm, AMD. :)
But the heat could be solved somehow, I guess. And the noise when watching movies was just not good at all; the 5850 was pretty much spot on for me. :)
I bought it at launch, the price is now 33% higher, so I'm a very satisfied customer! I thought I would regret it, but nope! :D

NVIDIA is focusing way too much on CUDA; OpenCL and its performance should be what they go for.
PhysX isn't worth that much, really. I used to keep a GeForce in my PC for it, but it got used maybe once every third month, so what's the point?

I just hope OpenCL takes off. Coding for it is quite easy, in fact, so I don't see any argument against it.
And then we can enjoy the apps on both AMD and NVIDIA GPUs!

OpenCL, Fusion, Sandy, DX11: lots of things in motion now that benefit us all.
Anyway, back on track here... ATI is really pushing these out quickly! I think this may be because of the artifact problems on some HD 5xxx systems.
The mouse pointer with multi-display, for example; I have the problem in StarCraft 2 every now and then. Not a biggie, it's just a green line, and after a minute it returns to normal.
#11
laszlo
Hmmm, I smell a Bart fart?
#13
wolf
Performance Enthusiast
by: TheLaughingMan
This article is just plain wrong. I am going to go with that.
Sounds like it to me. It doesn't make any sense to depart from their current naming scheme, though exactly where the chips fit into it makes little difference.
#14
cheezburger
by: Imsochobo

AMD has done well with efficient designs!

Ehm, shaders do a lot.
ATI has found a very good ratio; it has proven to be well balanced.
The only thing they actually needed versus NVIDIA was shader/tessellation power, where Fermi was superior; ATI has more ROPs, if I remember right.

256-bit is enough for the 6870.
192-bit would be enough for the 6770, I guess, but yeah, odd memory amounts...

ATI just needs to improve its tessellation; the new architecture may have this, and we'll find out with the 6xxx and for real with the 7xxx.

Excited to see what the future holds!
How do shaders do well on performance? Sorry to disappoint you, but most modern games are more ROP/Z-buffer hungry than shader hungry. Like I mentioned before, the only game engine that demands shaders the way AMD's design expects is Unreal 3, plus some crappy console games like Halo. Crysis/STALKER/Modern Warfare demand more ROP and bus power than AMD's 5D shaders provide. Of course you point to the 4xxx series because they're cheap, but they don't do very well in native PC games and only shine in some console ports (HAWX is also an Xbox port... but this will turn into another PC/console war, so I'll stop here). Also, you won't get extremely high 100+ frame rates at the highest settings; you'll be stuck at a "reasonable" frame rate in the 30s, due to RV770's lack of ROPs and poor data rate per ROP. Don't tell me you can just keep pushing the core frequency and GDDR5 up and up.

Big cards don't make profit? Then what does? People who don't use CAD or play video games won't even bother installing a graphics card. Console gamers won't buy one either, since their PCs aren't self-built or used for gaming (duh, they're console gamers...). Entry-level gamers would rather have a laptop and play The Sims and other casual games; sorry sir, Intel took that segment. The only markets left for both NV and AMD are high-end gamers and professional users. Would you spend 200 bucks on a card that only works great on console ports, or $400 on a card that handles any game? Say whatever you want about how bad GF100 is, but it does a pretty good job of squashing Cypress in many games, even if it draws more power. So what; these gamers don't care about polar bears and global warming anyway! Most people wouldn't care about this planet even if it died... anyway...

You guys keep talking about tessellation, but you have no idea about the structural design. Unlike Fermi's tessellation, which is integrated into its CUDA cores, AMD's design hangs off the ROPs (look at the die picture...)! The trade-off for this opposite approach is Cypress's smaller die. The only ways to improve it are to add ROPs or redesign the tessellation engine. The data bus can also affect tessellation performance. The result: Cypress wasn't even a tenth of GF100 in the Heaven benchmark. How do you improve tessellation without increasing something or doing a major redesign? Keeping the R600 architecture would be chronic suicide...

Again: Cayman will be 512-bit, with 64 ROPs, and cost $600+, whether you like it or not...
#15
largon
One fact about Cayman:
There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.
#16
bear jesus
by: largon
One fact about Cayman:
There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.
Very true :laugh:

I just hope it is not me *wishes really hard for 256-bit GDDR5 at 1600 MHz (6400 MHz effective), 64 ROPs, 96 TMUs, 1920 stream processors and an 850 MHz core* :D
#17
cheezburger
by: largon
One fact about Cayman:
There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.
by: bear jesus
Very true :laugh:

I just hope it is not me *wishes really hard for 256 bit gddr5 at 1600mhz (6400mhz effective), 64 rop's, 96 tmu's, 1920 stream processors and 850mhz core* :D
It was reported that the ChipHell benchmark photo of the HD 68xx was actually Barts XT, renamed from the earlier codename HD 6770, and the photo was dated late July to early August, when they didn't even have Cayman yet. So stop saying Cayman will be 256-bit, because we don't even know what Cayman will bring. The speculation of 2560:160:32, a 256-bit bus and 6.4 GT/s GDDR5 is completely wrong, like the earlier Barts speculation, which was also full of false BS (1600:80:16 + 128-bit bus... my ass). You cannot keep scaling shaders like RV670 to RV770 anymore, but some idiots just don't get it... shaders don't do that much in native PC games, especially AMD's inefficient 4+1 design. No matter how many you put on the die, it won't work as well as you think, and then you people will start whining about NV doing dirty tricks in the competition, blah blah blah. They just do a better job of putting in everything they can, that's all. You never explain why AMD's cards perform so close to relatively pricier NV cards in console games; it's because most console ports come from the Xbox, and Xbox games favor AMD's 5D shaders, with overdone lighting on low-detail textures and limited frame rates... What bang for the buck? I never play console ports, and these low-quality games are the reason AMD cards sell like hot cakes. Console games will destroy future technological invention, and it will happen soon enough.
#18
bear jesus
by: cheezburger
It was reported that the ChipHell benchmark photo of the HD 68xx was actually Barts XT, renamed from the earlier codename HD 6770, and the photo was dated late July to early August, when they didn't even have Cayman yet. So stop saying Cayman will be 256-bit, because we don't even know what Cayman will bring. The speculation of 2560:160:32, a 256-bit bus and 6.4 GT/s GDDR5 is completely wrong, like the earlier Barts speculation, which was also full of false BS (1600:80:16 + 128-bit bus... my ass). You cannot keep scaling shaders like RV670 to RV770 anymore, but some idiots just don't get it...
I never said it would be any size or spec; I just keep saying I am hoping, wishing or dreaming of random specs, basing my wishes on the fact that it could be double the Barts spec listed here, since for multiple generations the high end has basically been double the midrange. Although I admit I copy-pasted from the wrong post of mine; it should have said this:

"I just hope it is not me *wishes really hard for 512-bit GDDR5 at 1600 MHz (6400 MHz effective), 64 ROPs, 96 TMUs, 1920 stream processors and an 850 MHz core* :D"
And that is hardly serious; there is no logical reason to use 1600 MHz GDDR5 with a 512-bit bus unless the core is so crazy powerful it would need that much bandwidth, and I very much doubt that.
#19
bear jesus
by: cheezburger
Console games will destroy future technological invention, and it will happen soon enough.
I have to ask, what is it with you and consoles and console games? I'm not a fan of consoles and don't own any, but I don't complain about it. :p

There are many, many, many failed PC games with low-quality everything; does that mean PC gaming is killing PC gaming and destroying the future of technology? :p

I'm just curious how relevant anything about consoles is to a thread about an upcoming GPU's specs. (Not trying to be an ass or anything, just curious.)
#20
CDdude55
Crazy 4 TPU!!!
by: largon
One fact about Cayman:
There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.
Agreed.
#21
wolf
Performance Enthusiast
by: CDdude55
Agreed.
That's for sure... it's better to have modest hopes and be pleasantly surprised by a card than to pull awesome numbers from nowhere and get disappointed when they don't happen, awesome though they may be.

Side note, CDdude: you're eligible for a custom title, bro. :toast:
#22
cheezburger
by: bear jesus
I have to ask, what is it with you and consoles and console games? I'm not a fan of consoles and don't own any, but I don't complain about it. :p

There are many, many, many failed PC games with low-quality everything; does that mean PC gaming is killing PC gaming and destroying the future of technology? :p

I'm just curious how relevant anything about consoles is to a thread about an upcoming GPU's specs. (Not trying to be an ass or anything, just curious.)
Console games taking over the market didn't just happen yesterday. Since Crysis we haven't seen a single hardware-killing title for three years... yes, three years! Heavyweight titles are what push hardware progress. From Doom 3 and F.E.A.R. to Crysis, they drove legendary hardware such as the Athlon X2, NV43, Core 2 Duo and G80, and pushed the technology far forward. That lasted until console ports came along and started taking over the casual market, the people who own a f***ing Dell PC, destroying hardcore gaming and future hardware development. Maybe there is only 1% of high-end power users against 99% average Joes, but that 1% is what pushed the technology we know today.

Console ports are why the average user doesn't upgrade, or buys a cheap GPU that doesn't perform better. For example, a low-quality console title will run at 100 fps on a sub-$200 AMD card (RV770) while a $400 NV card (GT200) offers 200+ fps. Here's the problem: average people won't see the performance gap and are satisfied with a minimum "playable" frame rate. So AMD cards sell better, because the average casual gamer doesn't need a high-end GPU, is happy with anything above 30 fps, and doesn't need a $400 card that can push 200+ fps. The result is high-end technology going backward... game hardware requirements are no different from four years ago. Our technology has stayed the same for four to five years, and the average idiot is the cause. Denying NVIDIA, and denying any possibility of Cayman/Barts/a new architecture, is denying future invention. You're denying your own future! :shadedshu
#23
CDdude55
Crazy 4 TPU!!!
by: cheezburger
Console games taking over the market didn't just happen yesterday. Since Crysis we haven't seen a single hardware-killing title for three years... yes, three years! Heavyweight titles are what push hardware progress. From Doom 3 and F.E.A.R. to Crysis, they drove legendary hardware such as the Athlon X2, NV43, Core 2 Duo and G80, and pushed the technology far forward. That lasted until console ports came along and started taking over the casual market, the people who own a f***ing Dell PC, destroying hardcore gaming and future hardware development. Maybe there is only 1% of high-end power users against 99% average Joes, but that 1% is what pushed the technology we know today.

Console ports are why the average user doesn't upgrade, or buys a cheap GPU that doesn't perform better. For example, a low-quality console title will run at 100 fps on a sub-$200 AMD card (RV770) while a $400 NV card (GT200) offers 200+ fps. Here's the problem: average people won't see the performance gap and are satisfied with a minimum "playable" frame rate. So AMD cards sell better, because the average casual gamer doesn't need a high-end GPU, is happy with anything above 30 fps, and doesn't need a $400 card that can push 200+ fps. The result is high-end technology going backward... game hardware requirements are no different from four years ago. Our technology has stayed the same for four to five years, and the average idiot is the cause. Denying NVIDIA, and denying any possibility of Cayman/Barts/a new architecture, is denying future invention. You're denying your own future! :shadedshu
You're right to an extent. First off, there have been some pretty hardware-straining games on PC in the past three years, like Cryostasis, S.T.A.L.K.E.R. and Metro 2033, to name a few. I do think games these days aren't being tailor-made for the PC; they're directly ported while leaving behind the essentials that matter to us PC gamers. But I don't think a game "pushes out new technology" the way you claim. Companies don't go "OMG, F.E.A.R. is coming out soon guys, time to make a chip that can run it!1!!!1". Of course Intel pays some developers to feature the "Plays great on Core i7!" logo (as with Crysis), but that doesn't mean the chip is tailor-made for that game. I think the problem is the developers themselves: consoles make more money for them due to the bigger user base, and the PC version is a total afterthought these days. You go for the bigger fish first and throw the line in later for the smaller ones. No matter where gaming is, technology will always move forward; whether the step is big or little, there is always some kind of "innovation" happening. Consoles aren't the problem; it's the developers who don't take the time to optimize and make a proper PC game.

Also, it depends on the person and what they need or want for gaming. Are you really surprised that mainstream cards, computers and hardware sell more? That's what those companies focus on, because that's where the profit is; they aren't focused on the tiny percentage that is us. Whether someone needs a high-end GPU is a choice: does your average gamer need a 5970 with an overclocked i7, or 2x GTX 480s? The needs of an "average Joe PC gamer" are vastly different from ours, the small percentage. Devs see that and realize they can make money off those people by dumbing down our games, leaving us "hardcore" gamers and enthusiasts shunned. And why not shun us? We barely make them any money anyway; most of us spend more on our systems than we'll ever spend on their games. The crappiest systems and parts make the most profit; the uninformed make the most profit for them.
#24
wahdangun
by: cheezburger
It was reported that the ChipHell benchmark photo of the HD 68xx was actually Barts XT, renamed from the earlier codename HD 6770, and the photo was dated late July to early August, when they didn't even have Cayman yet. So stop saying Cayman will be 256-bit, because we don't even know what Cayman will bring. The speculation of 2560:160:32, a 256-bit bus and 6.4 GT/s GDDR5 is completely wrong, like the earlier Barts speculation, which was also full of false BS (1600:80:16 + 128-bit bus... my ass). You cannot keep scaling shaders like RV670 to RV770 anymore, but some idiots just don't get it... shaders don't do that much in native PC games, especially AMD's inefficient 4+1 design. No matter how many you put on the die, it won't work as well as you think, and then you people will start whining about NV doing dirty tricks in the competition, blah blah blah. They just do a better job of putting in everything they can, that's all. You never explain why AMD's cards perform so close to relatively pricier NV cards in console games; it's because most console ports come from the Xbox, and Xbox games favor AMD's 5D shaders, with overdone lighting on low-detail textures and limited frame rates... What bang for the buck? I never play console ports, and these low-quality games are the reason AMD cards sell like hot cakes. Console games will destroy future technological invention, and it will happen soon enough.
What the hell are you saying? Do you know why AMD chose 5D shaders?

It's because AMD can fit in more shader processors, more efficiently than NVIDIA's big shaders. And if native PC games ran poorly on ATI, then why, oh why, does Crysis run better on ATI than on its NVIDIA counterpart?

And by the way, I don't want to go back to when everything was EXPENSIVE; heck, I remember seeing a P3 800 MHz cost a whopping $1000. But I want devs to push the hardware more; we want another Crysis.
#25
cheezburger
by: wahdangun
What the hell are you saying? Do you know why AMD chose 5D shaders?

It's because AMD can fit in more shader processors, more efficiently than NVIDIA's big shaders. And if native PC games ran poorly on ATI, then why, oh why, does Crysis run better on ATI than on its NVIDIA counterpart?

And by the way, I don't want to go back to when everything was EXPENSIVE; heck, I remember seeing a P3 800 MHz cost a whopping $1000. But I want devs to push the hardware more; we want another Crysis.
Same with the 5D shaders: why didn't AMD go with 2 complex + 3 simple rather than 4 simple + only one complex? Because AMD had already optimized for console ports, which tend to use simple shader instructions in their engines. The units are 5D all right, but most of them (the 4 simple ones) become useless on complex instructions and more flexible code (such as PhysX or OpenCL), leaving only the one complex port working during gameplay. Most native PC games use far more complex code than consoles, while NVIDIA's big shaders are more versatile than AMD's 5D in every way. And about Crysis: under most settings even a 4890 has trouble outpacing a 9800 GTX+ in any bench (do not bring up the Vapor-X or some so-called 1.2 GHz super-overclocked edition that beats a stock GTX 260... don't give me that). AMD loses every native PC title and only wins on console ports. AMD's market share is nearly equal to console-port sales every year; that's why AMD planned to stay with midrange cards and wait for consoles to take the next step. That's where most people sit, "the most profitable spot".

And yeah, without a $1000 P3 800 MHz ten years ago you wouldn't even have had a cheaper P3 400 MHz, and therefore you wouldn't have any powerful processor like Core 2, or a powerful GPU that can render decent graphics like Crysis. Maybe your PC would be a 486 playing fubby island every day, I presume?