Thursday, September 16th 2010

AMD "Barts" GPU Detailed Specifications Surface

Barely a week after pictures of AMD's "Barts" prototype surfaced, a specifications sheet has followed. The all-important slide from AMD's presentation to its add-in board partners made it to sections of the Chinese media. "Barts" is a successor to "Juniper", the GPU on which the Radeon HD 5750 and HD 5770 are based. The specs sheet reveals that while the GPU does look physically larger, other factors make it bigger:

Memory Controller
Barts has a 256-bit wide memory interface, which significantly increases its pin count and package size. The "Pro" and "XT" variants (which will go on to be the HD 6x50 and HD 6x70, respectively) have memory clocked at 1000 MHz and 1200 MHz, respectively, which amounts to a nearly 100% increase in memory bandwidth over Juniper's 128-bit interface.
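The bandwidth claim is easy to verify with back-of-the-envelope math. GDDR5 transfers four bits per pin per memory clock, so, taking the HD 5770's 128-bit, 1200 MHz configuration as the Juniper baseline, the rumored specs work out like this (a quick sketch, not figures from the slide):

```python
# Peak GDDR5 bandwidth: (bus width in bytes) x memory clock x 4 transfers/clock.
def gddr5_bandwidth_gbs(bus_width_bits: int, mem_clock_mhz: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * mem_clock_mhz * 4 / 1000

juniper_xt = gddr5_bandwidth_gbs(128, 1200)  # HD 5770 (Juniper XT)
barts_xt = gddr5_bandwidth_gbs(256, 1200)    # rumored Barts XT
barts_pro = gddr5_bandwidth_gbs(256, 1000)   # rumored Barts Pro

print(f"HD 5770:   {juniper_xt:.1f} GB/s")   # 76.8 GB/s
print(f"Barts XT:  {barts_xt:.1f} GB/s")     # 153.6 GB/s, exactly double
print(f"Barts Pro: {barts_pro:.1f} GB/s")    # 128.0 GB/s
```

At the same 1200 MHz memory clock, the wider bus exactly doubles the bandwidth, and even the Pro's 1000 MHz memory lands well above anything Juniper-based.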

Tiny increase in SIMD count, but major restructuring
Compared to Juniper, there is an increase of only 20% in physical stream processor count. The XT variant has 960 stream processors, while the Pro variant has 800. AMD specifically mentioned SIMD block counts (10 enabled for the Pro, 12 enabled for the XT). Notice that the slide says the GPU is based on the "Cypress Dual Engine architecture", meaning these 10 and 12 SIMD units will be spread across two blocks of 5 (Pro) or 6 (XT) SIMDs each, just as Cypress had two blocks of 10 SIMDs each.
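For reference, these counts are consistent with the existing Evergreen layout, where each SIMD holds 16 thread processors of 5 ALUs each, i.e. 80 stream processors per SIMD. This is only a sketch assuming Barts keeps that 5-wide arrangement, which a shader redesign could of course change:

```python
# Each Evergreen-style SIMD: 16 thread processors x 5-wide VLIW = 80 SPs.
SP_PER_SIMD = 16 * 5

simd_counts = {
    "Juniper (HD 5770)": 10,
    "Cypress (HD 5870)": 20,
    "Barts Pro (rumored)": 10,
    "Barts XT (rumored)": 12,
}

for gpu, simds in simd_counts.items():
    print(f"{gpu}: {simds} SIMDs -> {simds * SP_PER_SIMD} stream processors")
# Juniper -> 800, Cypress -> 1600, Barts Pro -> 800, Barts XT -> 960
```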

Other components
The Raster Operations unit (ROP) count has been doubled to 32, while TMUs stand at 40 for the Pro and 48 for the XT.

The design methodology is extremely simple. Juniper-based graphics cards already carry eight memory chips to meet the 1 GB memory requirement using market-popular 1 Gbit GDDR5 chips, so why not place those eight chips across a 256-bit wide memory interface and double the memory bandwidth? The increased ROP count, coupled with an up to 20% increase in shader compute power, gives Barts the competitive edge it needs to face NVIDIA's reinvigorated GeForce 400 series after the introduction of the GeForce GTX 460. As for power draw, AMD projects the Pro variant to draw less than 150 W, with the XT drawing "over" 150 W.

Market Positioning
AMD doesn't have huge expectations for this GPU. Its task is cut out: to compete with the GeForce GTX 460 768 MB and 1 GB models. While NVIDIA carved out its variants with memory amount and ROP count, AMD's variants differ in clock speeds and SIMD counts. It should then be obvious what these GPUs' pricing will look like.

When?
Usually, when AMD gives such a presentation to its AIB partners, a market release is about 3 months away.

Source: ChipHell

110 Comments on AMD "Barts" GPU Detailed Specifications Surface

#1
cadaveca
My name is Dave
by: Semi-Lobster
But at 150 W, that is putting these cards on par with the 5850 while not being as good as the 5850. At that rate, you might as well get a 5850, since the prices will drop once the 6000 series hits store shelves.
What if it's faster than the 5850? If the "rumour" is true, and the shader complexity has changed, those 960 shaders might perform far better than the current design. Those 960 shaders might be equal to 1920 of today's, in certain situations.

ATI's 4+1 shader design might now be 2+2. We might see far higher GPU utilization, and the past rumour of a vastly superior "ultra-threading dispatch processor" seems to point in this direction.

Looking into the past... the 4770 basically = 3870, and the 5770 basically = 4890. So this 6770 should be somewhere around, at the least, 5850 to 5870 performance, if done right.
#2
cheezburger
So much for most of those rumors XD. A few days ago nobody believed me, while they were making up their own specifications by just adding more shaders and keeping 16 ROPs/128-bit for Barts. Now those people have to eat their words... anyway, Barts is going to be 192 ALU/48 TMU/32 ROPs/256-bit bus while Cayman will be 384 ALU/96 TMU/64 ROPs/512-bit bus. Believe it or not, AMD is heading for the high-end/professional market.

So much for the 7 GT/s GDDR5 on a 256-bit bus, 1920 shaders, 120 TMUs, 32 ROPs and $299 pricing LOL

by: bear jesus
Dreams of 6870(or whatever it will be called) spec.

512 bit gddr5 at 1600mhz (6400mhz effective) 64 rop's and 96 tmu's and 1920 stream processors and 1000mhz core speed would be a beautiful card
CAYMAN IS NOT GOING TO HAVE 480 ALUs (or 1920 shaders), even in a 4D-complexity arrangement. Those ALU shaders are too costly and take up huge die space; it would end up like Fermi.

6.4 GT/s GDDR5 also eats more power than lower-frequency RAM. It would be stupid for AMD to make that move....
#3
Semi-Lobster
by: cadaveca
What if it's faster than 5850? If "rumour" is true, and the shader complexity has changed, those 960 shaders might be far better performing than the current design. Those 960 shaders might be equal to 1920 of today, in certain situations.

ATI's 4+1 shader design might now be 2+2. We might see far higher gpu utilization, and the rumour in the past of a vastly superior "ultra-threading dispatch processor" seems to point more in this direction.

Looking into the past...4770 basically = 3870, and 5770 basically = 4890. So, this 6770, should be somewhere around, in the least, 5850 to 5870 performance, if done right.
You're right; the only thing I'm not on board with is 4890 = 5770! I've had both, and the 5770 is at best as good as the 4870. If you are right though, the next step down, the 6600 series (I wonder if AMD will release a 6660? :laugh:), will hopefully be as good as the 5770 and have lower power consumption.
#4
cadaveca
My name is Dave
by: Semi-Lobster
You re right, the only thing I'm not onboard with is the 4890=5770! I've had both and the 5770 is at best as good as the 4870. If you are right though, the next step down, the 6600 series (I wonder if AMD will release a 6660? :laugh:) will hopefully be as good as the 5770 and have lower power consumption
Yeah, it's not exact, and the 4770 was better than the 3870, but the 5770 kinda lacks the 4890's grunt, due to its 128-bit memory bus.

I think AMD may skip the "6600" series, as it's too close to old GeForce card naming, but maybe they will go with 64x0/65x0.

Late next month real details should be out, so I'm more than happy to wait and see what they bring to the table...

But I'm still sitting here waiting for the Crosshair IV Extreme for my 1090T, so I will also wait for next spring, and the high-end cards, before making any purchases, no matter how good these cards will be...
#5
Semi-Lobster
by: cadaveca
Yeah, it's not exact, and the 4770 was better than the 3870, but the 5770 kinda lacks the 4890's grunt, due to its 128-bit memory bus.

I think AMD may skip "6600" series, as is too close to old geforce card naming, but maybe they will go with 64x0/65x0.

Late next month real details should be out, so I'm more than happy to wait and see what they bring to the table...

But I'm still sitting here waiting for CrossHair4Extreme for my 1090T, so I will also wait for next spring, and the high-end cards, before making any purchases, no matter how good these cards will be...
The ATI/AMD naming cycle has been in its current form since the 2000 series; the numbering system informs consumers about a video card's relation to other video cards. 800 is high performance, 700 (which hasn't been around for very long) is more mainstream performance, 600 is mainstream, 500/400/300 are all budget, and 100 and 000 are USUALLY (but not always) reserved for IGPs. Not using the 600 would leave a weird gap in the line-up for no reason. If AMD was going to do something that drastic, they would probably prefer to radically change the entire naming system, and we all know this series is going to be the 6000 series.
#6
cadaveca
My name is Dave
by: Semi-Lobster
The ATI/AMD naming cycle has been in its current form since the 2000 series, the number system is to inform consumers about the video card's relation with other video cards. 800 is high performance, 700 (which hasn't been around for very long) is more mainstream performance, 600 series is mainstream, 500/400/300 are all budget and the 100 and 000 are USUALLY (but not always) reserved for IGPs. Not using the 600 would leave a weird gap for no reason in the line up. If AMD was going to do something that drastic they would probably prefer to radically change the entire naming system and we all know that this series is going to be the 6000 series
Sure, I agree with that, but they introduced the 3870 X2... then the 4870 X2... but with the 5-series, they called the dual-GPU card the 5970... instead of the 5870 X2...

For that reason alone, I wouldn't put it past them to go outside long-standing naming conventions... You could even say that now they are AMD as a whole, and not ATI/AMD, so anything is possible...
#7
cheezburger
by: Semi-Lobster
The ATI/AMD naming cycle has been in its current form since the 2000 series, the number system is to inform consumers about the video card's relation with other video cards. 800 is high performance, 700 (which hasn't been around for very long) is more mainstream performance, 600 series is mainstream, 500/400/300 are all budget and the 100 and 000 are USUALLY (but not always) reserved for IGPs. Not using the 600 would leave a weird gap for no reason in the line up. If AMD was going to do something that drastic they would probably prefer to radically change the entire naming system and we all know that this series is going to be the 6000 series
AMD's current naming scheme:

x900- dual GPU setup/enthusiast

x800- high end/professional

x700- performance

x600- mainstream

x500~x300- budget
#9
cheezburger
by: wolf
I'd say 256-bit with 1600 MHz (6400 MHz effective) GDDR5 and an 850 MHz core is what they seem to like now for XT variants. I also agree 1920 SPs looks likely, but given the Barts core has 32 ROPs, I am very keen to see where they go ROP-wise; if it's 48 or more you are looking at a serious increase in AA grunt right there, and they will have finally caught up with, if not surpassed, NV on that front.
Again, do you have any proof that Cayman is going to be 6.4 GT/s GDDR5 with a 256-bit bus and 32 ROPs? Because that came from some Chinese/Korean site with absolutely no evidence that the benchmark was real.

This is what happens when GPU-Z can't read a 9600 GT properly.

Do you think Cayman is going to be 256-bit because of a GPU-Z error? If Barts is 256-bit and half of Cayman's spec, then there's no reason Cayman can't have 64 ROPs and a 512-bit bus.
#10
cadaveca
My name is Dave
by: cheezburger

do you think cayman is going to be 256bit because of gpuz error? if Barts is 256bit and half of cayman's spec than there's no reason cayman cant be 64 rops and 512bit bus
Did the fact that this current info comes from an "official AMD slide" escape you?:shadedshu

ChipHell has been a pretty reliable source in the past.
#11
cheezburger
by: cadaveca
Did the fact that this current info comes from an "official AMD slide" escape you?:shadedshu

ChipHell has been a pretty reliable source in the past.
They might be putting Barts in the benchmark rather than Cayman, since a Cayman prototype hasn't even been out yet. Some rumors say it will start testing this month, while Barts finished testing back in June, so in all likelihood they didn't even have Cayman when they leaked the photo. We also don't really know whether the 68xx naming position is for Barts or Cayman, because according to the same website, some of the HD 6000 series MAY BE rebrands of the existing HD 5000 line. So who knows? Plus, ChipHell wasn't always correct: they predicted Barts would be 1200 ALUs (5D format), 60 TMUs and 16 ROPs with a 128-bit bus, but today's news is like a palm slapping their face really hard....
#12
cadaveca
My name is Dave
Except, of course, that they posted the new info, correcting themselves. The benchmarks don't matter... something as simple as a driver change makes benchmarks useless.

They may be making info up... they might be misled, even... it's really so unimportant. I don't understand why you think the sole source of info posting newer info that contradicts their earlier info is a bad thing.

Anyway, with only a month or so before launch, none of it matters, as the truth will come out very soon.
#13
cheezburger
by: cadaveca
Except of course, that they posted the new info, correcting themselves. the benchmarks don't matter...something as simple as a driver change makes benchmarks useless.

they may be making info up...they might be misled, even...it's really so unimportant, I don't understand why you think the sole source of info posting newer info that contradicts thier earlier info, is a bad thing?

Anyway, with only a month or so before launch, none of it matters, as the truth will come out very soon.
The only thing I dislike about them is that they spread rumors about something that hasn't even been released as an engineering sample, which ends up misleading the general audience. Using Barts in the benchmark instead of Cayman isn't a crime; however, if they already knew the sample they used wasn't Cayman, then they shouldn't have told people it was Cayman with a 256-bit bus... but that's all based on IF they knew.
#14
OneMoar
by: cheezburger
the only thing dislike them is because they rumor something that hasn't even release with engineering sample which it will end up mislead general audience. using barts in the benchmark instead of cayman is not guilty. however since they already knew the sample they used is not cayman which they shouldn't told people it is cayman that with 256bit bus... but that's all based on IF they knew already.
SHHH :nutkick:
#15
cadaveca
My name is Dave
by: cheezburger
but that's all based on IF they knew already.
Sure, but it's their reputation, right? Who cares?

Nobody should believe a single thing when it comes to tech rumours, until real, official info comes out, through official channels.

AMD has been playing catch-up since R600 and Phenom I. Both were largely over-hyped, and under-delivered.

All these products are unimportant. They don't really offer anything new...just a bit more added on to what already exists. "Fusion" is where the real future is, and all these products, no matter who is making them, are merely stop-gaps to generate income until they get it RIGHT. And the programming needs work.

To me, it seems that AMD is making the proper moves behind the scenes to prepare for this shift. Since they bought ATI, they have been headed towards a specific goal..and it's not really that close, just yet.

I'm gonna buy a high-end 6-series card. In fact, I'll probably buy 4 or more. But that card isn't even gonna come this year...it doesn't make any sense, business-wise, to do so.

But this 6770, it has to come out. And it's got to be real good. AMD needs to keep nvidia down, and they need a new card to do that. GTX460 is just that good.

In the future, nvidia is screwed in the x86 marketplace. Take a look at their stock value over the past 8 months, and you'll see that investors agree. AMD is down 36% vs nV's 44% YTD.

Without 32nm, nobody should expect too much, either. If these cards are even 33% faster than 5-series, AMD has done a good job. If it's more than that...AMD really has killed nV.


The few benches that were shown don't say anything in regards to real-world performance. I'll take this info here today though. I mean really now...AMD's own marketing says it all..."The Future is Fusion". Um, Hello?
#16
cheezburger
by: cadaveca
Sure, but it's thier reputation, right? Who cares?

Nobody should believe a single thing when it comes to tech rumours, until real, official info comes out, through official channels.

AMD has been playing catch-up since R600 and Phenom I. Both were largely over-hyped, and under-delivered.

All these products are unimportant. They don't really offer anything new...just a bit more added on to what already exists. "Fusion" is where the real future is, and all these products, no matter who is making them, are merely stop-gaps to generate income until they get it RIGHT. And the programming needs work.

To me, it seems that AMD is making the proper moves behind the scenes to prepare for this shift. Since they bought ATI, they have been headed towards a specific goal..and it's not really that close, just yet.

I'm gonna buy a high-end 6-series card. In fact, I'll probably buy 4 or more. But that card isn't even gonna come this year...it doesn't make any sense, business-wise, to do so.

But this 6770, it has to come out. And it's got to be real good. AMD needs to keep nvidia down, and they need a new card to do that. GTX460 is just that good.

In the future, nvidia is screwed in the x86 marketplace. Take a look at thier stock value over the past 8 months, and you'll see that investors agree. AMD is down 36% vs nV's 44% YTD.

Without 32nm, nobody should expect too much, either. If these cards are even 33% faster than 5-series, AMD has done a good job. If it's more than that...AMD really has killed nV.


The few benches that were shown don't say anything in regards to real-world performance. I'll take this info here today though.
Agreed. However, many people in this forum don't have any sense of reason and are easily rolled over by rumors. (Yeah, so much for 480 ALUs for Cayman... with only a 256-bit bus and 32 ROPs..)

Right now, unless NVIDIA can come out with another revolutionary architecture, like AMD has at this moment, they can only hope for the 28 nm fab as soon as possible. Since the GTX 460 is already far larger than Cypress, I don't think they can add any more features to it like AMD did with Cayman/Barts, not until NVIDIA gets rid of those bulky shaders first and finally starts over... but if Barts already outperforms the GTX 480 by a 33% margin, I personally doubt NV has any hope on the current 40 nm fab....

PS: hell! Cayman is revealed to be only 10~15% larger than GF104, but GF104 is far outclassed.
#17
toyo
There used to be a time when you had to wait until the very last hour to know how a card would perform... and there were sweet surprises... like the HD 4800 series.

From my point of view, the Radeons have a huge disadvantage with their lack of CUDA support. Maybe it will pay off supporting OpenCL, who knows.

And how could AMD let NVIDIA get exclusive support from Adobe for the Mercury engine? I can't understand it. It's like they really want to position their cards as good only for gaming. Wake up, AMD.
#18
AthlonX2
HyperVtX™
So AMD is going to release these first rather than releasing their next top dawg GPU?
#19
cadaveca
My name is Dave
by: AthlonX2
So AMD is going to release these first rather than releasing their next top dawg GPU?
Yeah, seems that way. I mean, that's how they used to do it too...new, smaller chip, on the new process, right?

So, same timeframe, but no new process. This means the new gen won't be all it could have been, but that's because of TSMC, not AMD, and it affects nV just as hard. I find it hard to fault AMD in this situation.


And if my theory on high-end GPU performance is right, they really need Bulldozer before they release a new high-end GPU, so that they can release an entire PLATFORM, rather than just a CPU and chipset, and then a GPU.

TSMC threw a huge wrench into the GPU market, but I can honestly say I saw this coming... I have been saying for years that ATI should move away from TSMC.


Imagine, if AMD had 28nm now, and nV didn't?

:roll:

nVidia really would have to roll over and die. NO x86, no new fab process...AMD kinda missed out on that one.
#20
cheezburger
by: cadaveca
Yeah, seems that way. I mean, that's how they used to do it too...new, smaller chip, on the new process, right?

So, same timeframe, but no new process. This means the new gen won't be all it could have been, but that's because of TSMC, not AMD, and effects nV just as hard. I find it hard to fault AMD in this situation.


And if my theory on high-end gpu performance is right, they really need bulldozer before they release a new high-end gpu, and as well so that they release an entire PLATFORM, rather than just a cpu and chipset, and then a gpu.

TSMC threw a big huge wrench in the gpu market, but I can honestly say I saw this coming for years...I have been saying for years that ATI should get away from using TSMC.


Imagine, if AMD had 28nm now, and nV didn't?

:roll:

nVidia really would have to roll over and die. NO x86, no new fab process...AMD kinda missed out on that one.
Don't worry, Intel is right behind them; they've wanted CUDA for so long.... I wouldn't want to imagine Intel acquiring NVIDIA and coming out with a completely steroided Fermi II on a 22 nm fab process..... it would be hell for AMD..
#21
Sasqui
"Positioned against"...

that implies two things:
  • It will be similar (or better) performance
  • It will be similar (or lower) price
Doesn't that mean having only two competitors really doesn't keep the price in check?
#22
cadaveca
My name is Dave
by: cheezburger
don't worry, intel is to the back, they want cuda for so long....i wouldn't imagine if intel acquire nvidia and come out a completely steroided fermi II with 22nm fab process.....it will be hell for amd..
I think Intel would rather let AMD smash nV, and then pick up the bits and pieces later, for a lot less cost. AMD would be doing Intel a favor. Plus, I don't think Intel has the fab capacity for nV GPUs, so they would still be reliant on TSMC.


by: Sasqui
Doesn't that mean having only two competitors really doesn't keep the price in check?
:laugh:





WAIT. You're just figuring this out now?:wtf:

Anyway, I'm hoping for same price.
#23
ToTTenTranz
Well, this pretty much confirms the change from the old 5D shaders.
They're probably also bumping up geometry performance, namely DX11 tessellation, along with the new shaders.

Nonetheless, I have no doubt that Barts will be a whole lot smaller than GF104, and thus cheaper to produce. Besides, since the HD 5830 can be made with a relatively small PCB, I have no doubt this card won't be much bigger than the HD 5770.


I do think they could just cut the prices in their current HD5000 line to stupidly low values (their yields should be sky-high by now) while holding off for the 32/28nm process. nVidia's underperforming Fermi architecture would allow them to do that.
#24
CDdude55
Crazy 4 TPU!!!
by: ToTTenTranz
nVidia's underperforming Fermi architecture would allow them to do that.
My GTX 470s say something completely different. :p:)
#25
20mmrain
Well, the thing I can't help but notice is that the 5770 has 800 stream processors... and this card (supposedly the "6770") has 300 shaders.

Now, I don't really get into the lingo of what means what... I just know what's the most powerful at the time and how to overclock it well :)

I thought stream processors were what ATI/AMD called their shader cores, correct?

If I have that right... wouldn't this mean that this has to be a new architecture? Considering that the old stream processors were weaker than NVIDIA's "CUDA" shader cores? And now this card only has 300 compared to the 5770's 800?

If I have this understood correctly... this will be one hell of a series. We might finally be able to adjust shader clocks on ATI cards too!? I just can't wait to see what's in store for this generation.

I will tell you what, though. Even if this card is meant to go against the GTX 460... the DX11 tessellation on these cards compared to Fermi (if the benchmarks are true) looks like this series will leave Fermi in the dust and nowhere to be seen.

I will definitely sell my GTX 460s for a pair of these, if not go even higher up the ladder if the price is right.

That's not to mention the 960-shader version of this card. This thing should be crazy as hell! :rockout:

But to whoever said "we could see this card be on the 5870's level": I hope so for AMD, but I hope not for our sake. Because if that's the case... we are looking at a mid-range card for $400 each and a top card for $1,000 or more, easy.