
AMD "Barts" GPU Detailed Specifications Surface

To be honest, I'm pretty disappointed by the high power consumption. For their short existence so far (starting with the revolutionary 4770), the x700 series have been excellent thanks to their low power consumption, which was great for entry-level gamers. But at 150 W these cards are on par with the 5850 for power draw while not being as good as the 5850. At that rate you might as well get a 5850, since prices will drop once the 6000 series hits store shelves.
 

What if it's faster than the 5850? If the rumour is true and the shader complexity has changed, those 960 shaders might perform far better than the current design. Those 960 shaders might be equal to 1920 of today's, in certain situations.

ATI's 4+1 shader design might now be 2+2. We might see far higher GPU utilization, and the earlier rumour of a vastly superior "ultra-threading dispatch processor" seems to point in this direction too.
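
To put rough numbers on that idea, here is a minimal sketch (the utilization figures below are invented for illustration, nothing from AMD): what matters is the number of physical ALUs multiplied by how often the dispatcher actually keeps them busy.

```python
# Rough illustration only: the utilization figures are made up, not AMD numbers.
# Effective shader throughput = physical ALUs x fraction of issue slots kept busy,
# so a smaller array with a better dispatcher can rival a larger, poorly fed one.

def effective_throughput(num_alus: int, utilization: float) -> float:
    """Useful shader operations per clock for a given array size and average utilization."""
    return num_alus * utilization

old_style = effective_throughput(1920, 0.40)  # hypothetical 4+1 design at ~40% utilization -> 768
new_style = effective_throughput(960, 0.80)   # hypothetical reworked design at ~80% -> 768

print(old_style, new_style)  # same useful work per clock in this contrived case
```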

Looking into the past... the 4770 was basically a 3870, and the 5770 was basically a 4890. So this 6770 should be, at the very least, somewhere around 5850-to-5870 performance, if done right.
 
Which is where most of the rumours failed XD. A few days ago nobody believed me; people were making up their own specs by just adding more shaders while keeping 16 ROPs and a 128-bit bus for Barts. Now those people need to grab their guns and shoot themselves... Anyway, Barts is going to be 192 ALUs / 48 TMUs / 32 ROPs / 256-bit bus, while Cayman will be 384 ALUs / 96 TMUs / 64 ROPs / 512-bit bus. Believe it or not, AMD is heading for the high-end/professional market.

So much for 7 GT/s GDDR5 on a 256-bit bus, 1920 shaders, 120 TMUs, 32 ROPs and a $299 price. LOL

Dreams of the 6870 (or whatever it will be called) spec:

512-bit GDDR5 at 1600 MHz (6400 MHz effective), 64 ROPs, 96 TMUs, 1920 stream processors and a 1000 MHz core clock would make a beautiful card.
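
For what it's worth, the bandwidth arithmetic behind that dream spec works out as below (my own back-of-the-envelope math, not a leaked figure).

```python
# GDDR5 moves 4 bits per pin per memory clock, so 1600 MHz -> 6400 MT/s effective.

def gddr5_bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a GDDR5 interface."""
    effective_mt_s = mem_clock_mhz * 4                       # quad data rate
    return effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

print(gddr5_bandwidth_gb_s(1600, 512))  # ~409.6 GB/s for the dream spec above
print(gddr5_bandwidth_gb_s(1600, 256))  # ~204.8 GB/s on a 256-bit bus at the same clock
```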

Cayman is NOT going to have 480 ALUs (i.e. 1920 shaders), even in a 4D arrangement. Those ALUs are too costly and take up huge die space; it would end up like Fermi.

6.4 GT/s GDDR5 also eats more power than lower-frequency RAM. It would be stupid for AMD to make that move...
 

You're right; the only thing I'm not on board with is the 4890 = 5770! I've had both, and the 5770 is at best as good as the 4870. If you are right, though, the next step down, the 6600 series (I wonder if AMD will release a 6660? :laugh:), will hopefully be as good as the 5770 and have lower power consumption.
 

Yeah, it's not exact, and the 4770 was better than the 3870, but the 5770 kinda lacks the 4890's grunt, due to its 128-bit memory bus.

I think AMD may skip the "6600" series, as it's too close to old GeForce card naming, but maybe they will go with 64x0/65x0.

Late next month real details should be out, so I'm more than happy to wait and see what they bring to the table...

But I'm still sitting here waiting for the Crosshair IV Extreme for my 1090T, so I will also wait for next spring, and the high-end cards, before making any purchases, no matter how good these cards turn out to be...
 

The ATI/AMD naming scheme has been in its current form since the 2000 series; the number system is there to tell consumers how a video card relates to the other cards in the line-up. 800 is high performance, 700 (which hasn't been around for very long) is upper-mainstream performance, 600 is mainstream, 500/400/300 are all budget, and 100 and 000 are usually (but not always) reserved for IGPs. Not using the 600 would leave a weird gap in the line-up for no reason. If AMD were going to do something that drastic, they would probably prefer to radically change the entire naming system, and we all know this series is going to be the 6000 series.
 

Sure, I agree with that, but they introduced the 3870 X2... then the 4870 X2... but with the 5-series they called the dual-GPU card the 5970... instead of 5870 X2...

For that reason alone, I wouldn't put it past them to go outside long-standing naming conventions... You could even say that now that they are AMD as a whole, and not ATI/AMD, anything is possible...
 

AMD's current naming scheme:

x900- dual GPU setup/enthusiast

x800- high end/professional

x700- performance

x600- mainstream

x500~x300- budget
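
To make that tiering concrete, here's a throwaway sketch (the helper and its name are mine, nothing official): the hundreds digit of the model number selects the tier.

```python
# Hypothetical helper mapping a Radeon model number to the tiers listed above.
TIERS = {
    9: "dual GPU / enthusiast",
    8: "high end",
    7: "performance",
    6: "mainstream",
    5: "budget", 4: "budget", 3: "budget",
}

def radeon_tier(model: int) -> str:
    """E.g. radeon_tier(6870) -> 'high end'; the hundreds digit decides the tier."""
    return TIERS.get((model // 100) % 10, "IGP / entry level")

print(radeon_tier(6870), radeon_tier(5770), radeon_tier(5450))
```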
 
I'd say 256-bit with 1600 MHz (6400 MHz effective) GDDR5 and an 850 MHz core is what they seem to like now for XT variants. I also agree 1920 SPs looks likely, but given the Barts core has 32 ROPs, I am very keen to see where they go ROP-wise; if it's 48 or more you are looking at a serious increase in AA grunt right there, and they will have finally caught, if not surpassed, NV on that front.

Again, do you have any proof that Cayman is going to be 6.4 GT/s GDDR5 with a 256-bit bus and 32 ROPs? Because it came from some Chinese/Korean site with absolutely no evidence that the benchmark was real.

This is what happens when GPU-Z can't properly read a 9600 GT:

[attached screenshot: mobile01-1db5a96e6d857ad1deeb9ee21cae9fab.jpg (GPU-Z misreading a 9600 GT)]


Do you think Cayman is going to be 256-bit because of a GPU-Z error? If Barts is 256-bit and half of Cayman's spec, then there's no reason Cayman can't be 64 ROPs with a 512-bit bus.
 

Did the fact that this current info comes from an "official AMD slide" escape you?:shadedshu

ChipHell has been a pretty reliable source in the past.
 

They might be putting Barts in that benchmark rather than Cayman, since a Cayman prototype isn't even out yet. Some rumours say it will start testing this month, while Barts finished testing back in June, so in all likelihood they didn't even have Cayman when they leaked the photo. Also, we don't really know whether the 68xx naming position is for Barts or Cayman, because according to the same website some of the HD 6000 line MAY be rebrands of existing HD 5000 cards. So who knows? Plus, ChipHell wasn't always correct: they predicted Barts was going to be 1200 ALUs (5D format), 60 TMUs and 16 ROPs with a 128-bit bus, but today's news is like a palm slapping their face really hard...
 
Except, of course, that they posted the new info, correcting themselves. The benchmarks don't matter... something as simple as a driver change makes benchmarks useless.

They may be making info up... they might be misled, even... it's really so unimportant. I don't understand why you think the sole source of info posting newer info that contradicts their earlier info is a bad thing.

Anyway, with only a month or so before launch, none of it matters, as the truth will come out very soon.
 

The only thing I dislike about them is that they spread rumours about something that hasn't even been released as an engineering sample, which ends up misleading the general audience. Using Barts in the benchmark instead of Cayman isn't the crime; but if they already knew the sample they used wasn't Cayman, they shouldn't have told people it was Cayman with a 256-bit bus... but that's all based on IF they knew already.
 


SHHH :nutkick:
 
but that's all based on IF they knew already.
Sure, but it's their reputation, right? Who cares?

Nobody should believe a single thing when it comes to tech rumours, until real, official info comes out, through official channels.

AMD has been playing catch-up since R600 and Phenom I. Both were largely over-hyped, and under-delivered.

All these products are unimportant. They don't really offer anything new...just a bit more added on to what already exists. "Fusion" is where the real future is, and all these products, no matter who is making them, are merely stop-gaps to generate income until they get it RIGHT. And the programming needs work.

To me, it seems that AMD is making the proper moves behind the scenes to prepare for this shift. Since they bought ATI, they have been headed towards a specific goal... and it's not really that close, just yet.

I'm gonna buy a high-end 6-series card. In fact, I'll probably buy 4 or more. But that card isn't even gonna come this year...it doesn't make any sense, business-wise, to do so.

But this 6770, it has to come out. And it's got to be real good. AMD needs to keep nvidia down, and they need a new card to do that. GTX460 is just that good.

In the future, nvidia is screwed in the x86 marketplace. Take a look at their stock value over the past 8 months, and you'll see that investors agree. AMD is down 36% vs nV's 44% YTD.

Without 32nm, nobody should expect too much, either. If these cards are even 33% faster than 5-series, AMD has done a good job. If it's more than that...AMD really has killed nV.


The few benches that were shown don't say anything in regards to real-world performance. I'll take this info here today though. I mean really now...AMD's own marketing says it all..."The Future is Fusion". Um, Hello?
 

Agreed; however, many people in this forum don't have any sense of reason and are easily rolled over by rumours. (Yeah, so much for 480 ALUs for Cayman... with only a 256-bit bus and 32 ROPs...)

Right now, unless NVIDIA can come out with another revolutionary architecture, like AMD is doing at this moment, they can only hope for the 28nm fab as soon as possible, since the GTX 460 is already far larger than Cypress. I don't think they can add any more features to it the way AMD did with Cayman/Barts, not until NVIDIA gets rid of those bulky shaders and finally starts over... but if Barts already outperforms the GTX 480 by a 33% margin, I personally doubt NV has any hope on the current 40nm fab...

PS: hell! Cayman is revealed to be only 10~15% larger than GF104, but GF104 is far outclassed.
 
There used to be a time when you had to wait until the very last hour to know how a card would perform... and there were sweet surprises... like the HD 4800 series.

From my point of view, the Radeons have a huge disadvantage with their lack of CUDA support. Maybe supporting OpenCL will pay off, who knows.

And how could AMD let Nvidia get exclusive support from Adobe for the Mercury engine? I can't understand it. It's like they really want to position their cards as good only for gaming. Wake up, AMD.
 
So AMD is going to release these first rather than releasing their next top dawg GPU?
 

Yeah, seems that way. I mean, that's how they used to do it too...new, smaller chip, on the new process, right?

So, same timeframe, but no new process. This means the new gen won't be all it could have been, but that's because of TSMC, not AMD, and it affects nV just as hard. I find it hard to fault AMD in this situation.


And if my theory on high-end GPU performance is right, they really need Bulldozer before they release a new high-end GPU, so that they can release an entire PLATFORM, rather than just a CPU and chipset, and then a GPU.

TSMC threw a huge wrench into the GPU market, but I can honestly say I saw this coming... I have been saying for years that ATI should get away from using TSMC.


Imagine if AMD had 28nm now, and nV didn't?

:roll:

nVidia really would have to roll over and die. NO x86, no new fab process...AMD kinda missed out on that one.
 


Don't worry, Intel is waiting in the background; they have wanted CUDA for so long... I wouldn't want to imagine Intel acquiring nvidia and coming out with a completely steroided Fermi II on a 22nm fab process... it would be hell for AMD.
 
"Positioned against"...

that implies two things:

  • It will be similar (or better) performance
  • It will be similar (or lower) price

Doesn't that mean having only two competitors really doesn't keep the price in check?
 
Don't worry, Intel is waiting in the background; they have wanted CUDA for so long... I wouldn't want to imagine Intel acquiring nvidia and coming out with a completely steroided Fermi II on a 22nm fab process... it would be hell for AMD.

I think Intel would rather let AMD smash nV, and then pick up the bits and pieces later, for a lot less cost. AMD would be doing Intel a favor. Plus, I don't think Intel has the capacity for nV GPUs, so they would still be reliant on TSMC.


Doesn't that mean having only two competitors really doesn't keep the price in check?




:laugh:





WAIT. You're just figuring this out now?:wtf:

Anyway, I'm hoping for the same price.
 
Well, this pretty much confirms the change from the old 5D shaders.
They're probably also bumping the geometry performance, namely DX11 tessellation, along with the new shaders.

Nonetheless, I have no doubt that Barts will be a whole lot smaller than GF104, and thus cheaper to produce. Besides, since the HD 5830 can be made with a relatively small PCB, I have no doubt this card won't be much bigger than the HD 5770.


I do think they could just cut the prices in their current HD5000 line to stupidly low values (their yields should be sky-high by now) while holding off for the 32/28nm process. nVidia's underperforming Fermi architecture would allow them to do that.
 