Monday, February 9th 2015

Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

AMD's upcoming Radeon R9 380X and R9 380 graphics cards, with which it intends to directly take on the GeForce GTX 980 and GTX 970, will be based on a "new" silicon codenamed "Grenada." Built on the 28 nm silicon fab process, Grenada will be a refined variant of "Hawaii," much in the same way "Curacao" was of "Pitcairn" in the previous generation.

The Grenada silicon will have the same specs as Hawaii - 2,816 GCN stream processors, 176 TMUs, 64 ROPs, and a 512-bit wide GDDR5 memory interface, holding 4 GB of memory. Refinements in the silicon over Hawaii could allow AMD to increase clock speeds enough to outperform the GTX 980 and GTX 970. We don't expect the chip to be any more energy-efficient at its final clocks than Hawaii; AMD's design focus appears to be performance. AMD could save itself the embarrassment of a loud reference-design cooler by opening the chip up to quiet custom-design cooling solutions from AIB (add-in board) partners from day one.
In other news, the "Tonga" silicon, which made its debut with the performance-segment Radeon R9 285, could form the foundation of the Radeon R9 370 series, consisting of the R9 370X and the R9 370. Tonga physically features 2,048 stream processors based on the more advanced GCN 1.3 architecture, 128 TMUs, 32 ROPs, and a 384-bit wide GDDR5 memory interface. Both the R9 370 and R9 370X could feature 3 GB as the standard memory amount.

The only truly new silicon with the R9 300 series is "Fiji." This chip will be designed to drive AMD's high-end single- and dual-GPU graphics cards, and will be built to compete with the GM200 silicon from NVIDIA and the GeForce GTX TITAN-X it will debut with. This chip features 4,096 stream processors based on the GCN 1.3 architecture - double that of "Tonga" - 256 TMUs, 128 ROPs, and a 1024-bit wide HBM memory interface, offering 640 GB/s of memory bandwidth. 4 GB could be the standard memory amount. The three cards AMD will carve out of this silicon are the R9 390, the R9 390X, and the R9 390X2.
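The bandwidth figures quoted above follow from straightforward arithmetic: bus width in bits, multiplied by the per-pin data rate, divided by eight bits per byte. Below is a minimal sketch of that calculation; the 5 Gbps effective GDDR5 rate for Hawaii/Grenada and the per-pin rate implied by the rumored 640 GB/s HBM figure are assumptions for illustration, not confirmed specifications.

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits x per-pin rate in Gb/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8.0

# Hawaii / "Grenada": 512-bit GDDR5 at an assumed 5 Gbps effective rate (as on the R9 290X).
print(peak_bandwidth_gbs(512, 5.0))   # 320.0 GB/s

# Rumored "Fiji": 1024-bit HBM interface. Hitting the quoted 640 GB/s would imply roughly
# 5 Gbps per pin -- an assumption here, since the HBM configuration isn't confirmed.
print(peak_bandwidth_gbs(1024, 5.0))  # 640.0 GB/s
```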
Source: 3DCenter.org

156 Comments on Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

#76
HumanSmoke
RejZoRWho cares about efficiency really.
You should if you value the viability of the company.
Efficiency at the high end really isn't that pressing a concern - Nvidia built market share fielding the GT200 and GF100 - although it might be a consideration in non-gaming scenarios.
Where you should care about efficiency is that the architecture scales efficiently with the smaller GPUs. As I said earlier, AMD aren't competitive in discrete mobile, and that also reflects on low/mid range OEM builds, where the vast majority of discrete sales happen.
You can say that efficiency doesn't matter (to you), but it just means one more strike against sales and revenue, which means less R&D funding, which means that AMD may not be able to field a top-to-bottom GPU refresh(!). So many things don't seem to matter with AMD - GPU efficiency, enthusiast desktop, the x86 server market....sooner or later the sum total of these "doesn't matter" must indeed matter.
Posted on Reply
#77
64K
CasecutterWouldn't such time you'd be "Sleeping" and wouldn't AMD ZeroCore be in play.
Just saying.
Yes, that would further cut back on watts being used while idling. For simplicity's sake, I used the idle wattage measurements from the cards reviewed on this site.
Posted on Reply
#78
TheHunter
As long as they release Fiji XT aka 390/390X soon. I don't mind if they rebrand the Cayman chip for that matter; NVIDIA has been doing it a lot too, so it's not really important.

So you all argue about power consumption with 800 W+ PSUs? Really, chill... power consumption is overrated. :p


Also, there is no TITAN-X or whatever you all like to call the next Titan; GM200 won't have DP, so no Titan variant, just a GeForce (google the NVIDIA Japan conference, 30-12-2014).
Talk about Wccf spreading this false Titan-X hype to the max... :shadedshu:
Posted on Reply
#79
Lionheart
arbiterLeast with 680 to 770, Nvidia did clock bumps and made 770 faster, unlike AMD where 7970 to 280x had its clocks lowered.

news articles can bash all they want pointing at past where a company complete screwed up on a product and yes that stock blocker from a 6000 series card was a complete screw up, Pretty bad that you lose 20% performance after 5min of gaming

I am laughing up a storm atm, remember all the AMD fans jumping on rumors and thinking 380x was gonna be a 4096 gcn monster yet its not even close to what they were expecting.
HD 7970: 925 MHz GPU / 1,375 MHz memory || R9 280X: 1,000 MHz GPU / 1,500 MHz memory. You were saying? o_O But I'm pretty sure you were referring to the HD 7970 GHz Edition. Still, the 280X traded blows well with the GTX 770 while being slightly cheaper, depending on the country you lived in.

Agreed, those reference coolers were awful, just like the GTX 480's cooler.

I don't see the big deal :wtf: If the 380X was going to be the 4096-shader monster, I'm pretty sure the 390X would have been the dual-GPU config. Just different labels :wtf:
Posted on Reply
#80
Slizzo
xfia

I could see how it would be possible for shrinking GPUs to show many of the same problems. They are loving smaller litho for mobile devices, but perhaps there are bigger hurdles on the high-end GPU side of things.
I find it funny people simply glossed over this. For good reason, too.

You're comparing apples to oranges here, for two different reasons.

1. You're comparing CPU architecture to GPU, which are very different in design.
2. You're comparing a chip produced by Intel's fabs to that of one designed by AMD, but produced at either TSMC or Global Foundries.
Posted on Reply
#81
HumanSmoke
TheHunterAlso there is no TITAN-X or what ever you all like about Titan, GM200 won't have DP so no Titan variant, just Geforce (google Nvidia Japan conference 30-12-2014)..
Why would the name Titan have to be linked to double precision? Why couldn't Nvidia differentiate a Titan from a 980 Ti by higher clocks, higher board power limit, larger vRAM allocation. Just because something happened in a previous series, it doesn't automatically follow that the convention is set in stone. There are plenty of examples where just a difference in clock has divided two completely separate models - two that spring immediately to mind are the HD 3870/3850 and HD 4870/4850.
TheHunterTalk about Wccf spreading this false Titan-X hype to the max..:shadedshu:
WTFtech is all about the hype, whatever flavour. Take it seriously at your peril. I seem to remember that this forum went batshit crazy when WCCF announced AMD's GPUs were going to be 20nm even as people (including myself) attempted to inject some reason by showing that 20nm isn't particularly feasible for large GPUs. If the rumour fits peoples wish list, good luck with trying to dissuade them.
Posted on Reply
#82
arbiter
LionheartHD7970 - 925mhz GPU / 1375mhz Memory || R9 280x 1000mhz GPU / 1500mhz You were saying? o_O But I'm pretty sure you were referring to the HD7970Ghz edition. Still the 280x traded blows well with the GTX 770 while being slightly cheaper depending on the country you lived in.
I was referring to the GHz Edition, since it's pretty much the only version of the 7970 people remember. It was originally released in January; two months later, in March, NVIDIA dropped the 680 bombshell on them. In May, AMD pushed a BIOS for GHz clocks and new cards.
Posted on Reply
#83
Eric_Cartman
It seems AMD is doubling down on their Fermi 2.0!
Posted on Reply
#84
Casecutter
64KYes, that would further cut back on watts being used while idling. For simplicities sake I used the idle watts measurements from the cards reviewed on this site.
Well, there's one site that has at times shown "monitor-off" power numbers, although I can't recall which one anymore. That site has shown how something like an R9 280 will drop to around 2 amps, while a 760 is still pulling 7 amps, all while doing nothing... It's incredible that this isn't factored into the conversation and equation on efficiency.
Posted on Reply
#85
TheHunter
HumanSmokeWhy would the name Titan have to be linked to double precision? Why couldn't Nvidia differentiate a Titan from a 980 Ti by higher clocks, higher board power limit, larger vRAM allocation. Just because something happened in a previous series, it doesn't automatically follow that the convention is set in stone. There are plenty of examples where just a difference in clock has divided two completely separate models - two that spring immediately to mind are the HD 3870/3850 and HD 4870/4850.

WTFtech is all about the hype, whatever flavour. Take it seriously at your peril. I seem to remember that this forum went batshit crazy when WCCF announced AMD's GPUs were going to be 20nm even as people (including myself) attempted to inject some reason by showing that 20nm isn't particularly feasible for large GPUs. If the rumour fits peoples wish list, good luck with trying to dissuade them.
Because that's what a Titan is: a crippled Tesla card for consumers, with double precision. Without it, it's not worthy of the Titan name, simple as that.

What's more, NVIDIA said at that Japan tech conference there won't be any DP GPU with Maxwell, only with Pascal, and then we will see new Teslas/Titans again.
www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-to-speed-up-development-of-graphics-processing-architectures/

Besides, this GM200 GeForce (1080 GTX?) will feature 6 GB of VRAM anyway, so the Titan name is irrelevant now, if it was that extra 3 GB VRAM buffer on GK110 that made them a little more special.



Anyway, OT: can't wait for this Fiji XT. Really interested in that 3D memory; it should make a big change or two at higher resolutions.
Posted on Reply
#86
xfia
SlizzoI find it funny people simply looked over this. For good reason too.

You're comparing apples to oranges here, for two different reasons.

1. You're comparing CPU architecture to GPU, which are very different in design.
2. You're comparing a chip produced by Intel's fabs to that of one designed by AMD, but produced at either TSMC or Global Foundries.
I suppose you're right to a certain point, but it's all silicon, and it takes a lot of money and engineering to shrink...
Posted on Reply
#87
HumanSmoke
TheHunterBecause that's what a Titan is, a crippled Tesla card for consumers with Double precision, without it its not worthy of a Titan name, simple as.
The company might choose to use the name in any way it sees fit. Just because the previous Titan had a certain feature set, it doesn't mean that the next is bound by the same criteria. The name is Titan, not Titan Double Precision. You're welcome to your opinion, but please don't represent it as absolute fact.
Personally, I'd like to see the name changed to Zeus (son of Titans) if only to troll AMD rumour lovers, btarunr, and RCoon ;)
TheHunterWhat's more, nvidia said @ that Japan tech conference there won't be any DP gpu with Maxwell
That is incorrect. GM 200 likely has double precision at the same rate as GM 204 (1:32). Insufficient for Tesla duties, but that is why the rep said that Kepler will continue to be the Tesla option - simply because GK 210's development has been in tandem with GM 200.
Posted on Reply
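To put that 1:32 ratio in perspective, here is a rough back-of-the-envelope sketch of peak throughput, using the GTX 980's published shader count and base clock and the GK110-based Tesla K40 as the FP64 reference point. It illustrates the ratio argument only; it is not a statement about GM200's final configuration.

```python
def peak_tflops(shaders: int, clock_ghz: float, fp64_ratio: float = 1.0) -> float:
    """Peak throughput in TFLOPS: shaders x 2 FLOPs per cycle (FMA) x clock, scaled by the FP64 rate."""
    return shaders * 2 * clock_ghz * fp64_ratio / 1000.0

# GTX 980 (GM204): 2,048 shaders at a 1.126 GHz base clock.
print(round(peak_tflops(2048, 1.126), 2))          # ~4.61 TFLOPS FP32
print(round(peak_tflops(2048, 1.126, 1 / 32), 2))  # ~0.14 TFLOPS FP64 at a 1:32 rate

# Tesla K40 (GK110): 2,880 shaders at 0.745 GHz, with FP64 at 1:3 of FP32.
print(round(peak_tflops(2880, 0.745, 1 / 3), 2))   # ~1.43 TFLOPS FP64
```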
#88
TheHunter
Titan is a class of its own, not really a high-end reference chip name. But if they do call it Titan, it won't have a Titan-like price.

Current Titans cost so much because of FP64 DP, not because of the extra 3 GB of VRAM.
HumanSmokeThat is incorrect. GM 200 likely has double precision at the same rate of GM 204 ( 1:32 ). Insufficient for Tesla duties, but that is why the rep said that Kepler will continue to be the Tesla option - simply because GK 210's developmenthas been in tandem with GM 200.
Exactly, it's improved FP32, but it's not FP64. So it can't be used like the GK110 Titan in FP64 mode.

Btw, that GK210 is a 2x improved and more energy-efficient GK110, each with 2,496 cores... so it's not really Maxwell either :)



What I'm also trying to say is that all this means no "absurd" prices for us end users from either camp, AMD Fiji XT or NV GM200 - the usual 550-650 $/€.
Posted on Reply
#89
Razorfang
HumanSmokeThe company might choose to use the name in any way it sees fit. Just because the previous Titan had a certain feature set, it doesn't mean that the next is bound by the same criteria. The name is Titan, not Titan Double Precision. You're welcome to your opinion, but please don't represent it as absolute fact.
Personally, I'd like to see the name changed to Zeus (son of Titans) if only to troll AMD rumour lovers, btarunr, and RCoon ;)

That is incorrect. GM 200 likely has double precision at the same rate of GM 204 ( 1:32 ). Insufficient for Tesla duties, but that is why the rep said that Kepler will continue to be the Tesla option - simply because GK 210's developmenthas been in tandem with GM 200.
i.imgur.com/68pxm0d.gif
Posted on Reply
#90
HumanSmoke
TheHunterBtw that GK210 is 2x improved and energy efficient GK110 each with 2496cores.. So its not really Maxwell either :)
Didn't say it was. GK 210 has to be Kepler (as should be apparent in the link I supplied). The revised silicon was obviously designed because Maxwell wasn't going to offer full FP64. Just for the record, GK 210 has twice the cache of the GK 110, a significant boost algorithm, and improved double precision performance in addition to the energy efficiency you mentioned - so quite a bit of design reworking, which would account for the length of its gestation (and why the company invested in the R&D if Maxwell wasn't targeting the co-processor market).
TheHunterWhat Im also trying to say is, all this means no "absurd" prices for us end-users, from both camps AMD FijiXT and NV GM200, the usual 550-650$/€.
IF there aren't significant differences between the top GM 200 and the salvage parts, I'd agree. Titan's productivity popularity probably had less to do with FP64 than it did the 6GB of vRAM it carries for content creation duties. I wouldn't be at all surprised if Nvidia leveraged a top price against a 12GB top part. There are already 8GB 290X's, and I'd assume that would spill over to any rejigged SKUs. Assuming Nvidia answers that with 8GB GTX 980's, a 6GB GM 200 looks a little "underspecced" from a marketing viewpoint.
Posted on Reply
#91
Convexrook
When will we have a universal physics support system from both companies? Games do feel better with NVIDIA physics applied to them. I cannot be the only one that wants this from game developers. Something like what Havok physics did, but GPU-based.
Posted on Reply
#92
RejZoR
They just "feel" better, but they aren't better. Especially if you knew all the shit NVIDIA has been doing to push their PhysX crap (removing entire physics effects from games - effects that used to be done through the CPU in other games, basic stuff like smashed glass falling and staying on the ground).

Oh, and for general public information, AMD's TressFX works on ALL graphics cards, not just AMD's, because unlike NVIDIA's proprietary crap, TressFX works through DirectCompute, which means support on all modern graphics cards.
Posted on Reply
#93
Sony Xperia S
64KIdle power draw for the cards

R9 290X Reference Card 17 watts
R9 290X Lightning 22 watts

GTX 980 Reference 8 watts
GTX 980 Gaming 14 watts
Mistake. :D

GTX 980 8 W;
GTX 970 9 W;

R9 290 16 W;
R9 290X 17 W.

www.techpowerup.com/reviews/Gigabyte/GTX_960_G1_Gaming/26.html

That is exactly double the power consumption, and it's a matter of principle...
In multi-display it is even worse.

GTX 980 9 W;
GTX 970 10 W;

R9 290 51 W;
R9 290X 54 W.

:(
ShurikNI can probably find more money than that on the pavement... daily :D
Idling is not an issue. People are grasping at straws...
You are mistaken. It actually shows that something is not working properly.
Posted on Reply
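For a sense of what those idle deltas translate to in running costs, here is a minimal sketch of the arithmetic. The 8 hours per day of idle time and the $0.12/kWh electricity price are assumptions picked purely for illustration; plug in your own numbers.

```python
def annual_cost_usd(delta_watts: float, hours_per_day: float = 8.0, usd_per_kwh: float = 0.12) -> float:
    """Yearly cost of a constant power-draw difference: watts -> kWh over a year -> USD."""
    return delta_watts / 1000.0 * hours_per_day * 365 * usd_per_kwh

# Multi-monitor idle, per the numbers above: R9 290X at 54 W vs. GTX 980 at 9 W.
print(round(annual_cost_usd(54 - 9), 2))  # ~15.77 USD/year under these assumptions

# Single-monitor idle: 17 W vs. 8 W.
print(round(annual_cost_usd(17 - 8), 2))  # ~3.15 USD/year
```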
#94
HumanSmoke
AMD sort-of-news devolves into flame war. Colour me shocked!
RejZoRThey just "feel" better, but the aren't better. Especially if you'd know all the shit NVIDIA has been doing to push their PhysX crap
FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
FUN FACT #3: When the PhysX hack for AMD cards arrived, it was AMD who threw up the roadblock.

So, ATI/AMD couldn't be bothered buying PhysX, couldn't be bothered licensing it once Nvidia purchased it, and actively blocked the development of a workaround that would have allowed the AMD community to use it. If you have an Nvidia card you can use it. If you have an AMD card, why should you care? AMD certainly don't.
Posted on Reply
#95
xfia
PhysX... what's that donkey shit people assign an extra GPU to handle? Haha, it was never needed, but it sure has a nice name... Maybe most of the people at AMD didn't want to feel dirty about putting a logo on something that PCs can just do. Are we talking about FreeSync? Got confused there for a sec.
Posted on Reply
#96
arbiter
RejZoRThey just "feel" better, but the aren't better. Especially if you'd know all the shit NVIDIA has been doing to push their PhysX crap (removing entire physics effects from games that used to be done through CPU in other games, basic stuff like smashed glass falling and staying on the ground)

Oh and for general public information, AMD's TressFX works on ALL graphic cards, not just AMD, because unlike NVIDIA's proprietary crap, TressFX works through DirectCompute, which means support on all modern graphic cards.
You do know TressFX is limited to HAIR. PhysX does a lot more than that, such as how a body falls down stairs, or how pieces of a wall hit the floor when a bullet strikes it. Next time, read up on tech before spouting off like you know anything.
HumanSmokeAMD sort-of-news devolves into flame war. Colour me shocked!

FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
FUN FACT #3: When the PhysX hack for AMD cards arrived, it was AMD who threw up the roadblock.

So, ATI/AMD couldn't be bothered buying PhysX, couldn't be bothered licensing it once Nvidia purchased it, and actively blocked the development of a workaround that would allow the AMD community from using it. If you have an Nvidia card you can use it. If you have an AMD card, why should you care? AMD certainly don't.
Yeah, sad how so many people forget the fact that AMD had their chance to license it a long time ago yet refused, and now they create PR that they are/were locked out of it. AMD wants everything for free because, well, they don't have the money to do it themselves. Nvidia is a business; they're not UNICEF.
xfiaphysx.. whats that donkey shit people assign a extra gpu to handle? haha it was never needed but it sure has a nice name.. maybe most of the people at AMD didnt want to feel dirty about putting a logo on something that pc's can just do. are we talking about freesync? got confused there for a sec.
Oh, I thought we were talking about Mantle for a second there. (Cue the AMD fan claiming FreeSync is the industry standard or Mantle is open source.)
Posted on Reply
#97
TheGuruStud
HumanSmokeAMD sort-of-news devolves into flame war. Colour me shocked!

FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
FUN FACT #3: When the PhysX hack for AMD cards arrived, it was AMD who threw up the roadblock.

So, ATI/AMD couldn't be bothered buying PhysX, couldn't be bothered licensing it once Nvidia purchased it, and actively blocked the development of a workaround that would allow the AMD community from using it. If you have an Nvidia card you can use it. If you have an AMD card, why should you care? AMD certainly don't.
Why would AMD give money to those crooks at Nvidia (their arch-nemesis, no less)? Nvidia locks PhysX to the GPU even though it can all be done on the CPU (even for their poor video card owners)... it's pointless, 100% pointless.
Nvidia are such crybaby bitches that they actively block their cards from running PhysX when made secondary to an AMD card. You made that point... and it goes against your propaganda!

Obviously, you have no rebuttal against TressFX LOL. Nvidia won't have ANY of this open standard stuff. They'll bankrupt the company before they let it happen. That's how arrogant and greedy they are.

And I know you fanboys are INCREDIBLY butt hurt about Mantle and Freesync. Let me see those tears, baby!
Posted on Reply
#98
xfia
nice one :laugh:
Posted on Reply
#99
arbiter
TheGuruStudWhy would AMD give money to those crooks at Nvidia (their arch nemesis no less)? Nvidia locks physx to gpu even though it can all be done on CPU (even for their poor vid card owners)...it's pointless, 100% pointless.
Nvidia are such crybaby bitches that they actively block their cards from using physx when made secondary to an AMD card.

Obviously, you have no rebuttal against TressFX LOL. Nvidia won't have ANY of this open standard stuff. They'll bankrupt the company before they let it happen. That's how arrogant and greedy they are.

And I know you fanboys are INCREDIBLY butt hurt about Mantle and Freesync. Let me see those tears, baby!
Because the CPU is too slow to do the work; the GPU is much faster at the kind of calculations needed for it.

Last I checked, FreeSync and Mantle are proprietary, CLOSED software from AMD. So tell us another AMD fanboy blind lie.
Posted on Reply
#100
RejZoR
HumanSmokeAMD sort-of-news devolves into flame war. Colour me shocked!

FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
FUN FACT #3: When the PhysX hack for AMD cards arrived, it was AMD who threw up the roadblock.

So, ATI/AMD couldn't be bothered buying PhysX, couldn't be bothered licensing it once Nvidia purchased it, and actively blocked the development of a workaround that would allow the AMD community from using it. If you have an Nvidia card you can use it. If you have an AMD card, why should you care? AMD certainly don't.
FUN FACT. LEGAL REASONS.

And so what if TressFX is limited only to hair? It does work on ANY graphics card with DirectCompute support. You can't even have PhysX hardware-accelerated hair if you just happen to have a Radeon...
Posted on Reply