Monday, February 9th 2015

Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

AMD's upcoming Radeon R9 380X and R9 380 graphics cards, with which it wants to take on the GeForce GTX 980 and GTX 970 directly, will be based on a "new" silicon codenamed "Grenada." Built on the 28 nm fab process, Grenada will be a refined variant of "Hawaii," much in the same way "Curacao" was of "Pitcairn" in the previous generation.

The Grenada silicon will have the same specs as Hawaii - 2,816 GCN stream processors, 176 TMUs, 64 ROPs, and a 512-bit wide GDDR5 memory interface holding 4 GB of memory. Refinements in the silicon over Hawaii could allow AMD to increase clock speeds enough to outperform the GTX 980 and GTX 970. We don't expect the chip to be any more energy efficient at its final clocks than Hawaii; AMD's design focus appears to be performance. AMD could also save itself the embarrassment of a loud reference-design cooler by leaving the chip to quiet custom-design cooling solutions from AIB (add-in board) partners from day one.
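
For context on that 512-bit GDDR5 interface, peak memory bandwidth is simply bus width times per-pin data rate. A minimal sketch in Python, where 5 Gbps is Hawaii's stock (R9 290X) memory speed and the 6 Gbps case is a purely hypothetical faster-memory scenario, not a confirmed Grenada spec:

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(gddr5_bandwidth_gbs(512, 5.0))  # 320.0 GB/s -- Hawaii (R9 290X) at its stock 5 Gbps GDDR5
print(gddr5_bandwidth_gbs(512, 6.0))  # 384.0 GB/s -- hypothetical faster GDDR5, assumption only
```
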
In other news, the "Tonga" silicon, which made its debut with the performance-segment Radeon R9 285, could form the foundation of Radeon R9 370 series, consisting of the R9 370X, and the R9 370. Tonga physically features 2,048 stream processors based on the more advanced GCN 1.3 architecture, 128 TMUs, 32 ROPs, and a 384-bit wide GDDR5 memory interface. Both the R9 370 and R9 370X could feature 3 GB of standard memory amount.

The only truly new silicon in the R9 300 series is "Fiji." This chip will be designed to drive AMD's high-end single- and dual-GPU graphics cards, and will be built to compete with the GM200 silicon from NVIDIA and the GeForce GTX TITAN-X it will debut with. Fiji features 4,096 stream processors based on the GCN 1.3 architecture (double that of "Tonga"), 256 TMUs, 128 ROPs, and a 1024-bit wide HBM memory interface offering 640 GB/s of memory bandwidth. 4 GB could be the standard memory amount. The three cards AMD will carve out of this silicon are the R9 390, the R9 390X, and the R9 390X2.
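
The same bandwidth arithmetic can be turned around to see what per-pin data rate the quoted HBM figures would imply. This is a sketch of the arithmetic only, not a statement of the actual HBM specification:

```python
def implied_data_rate_gbps(bandwidth_gbs: float, bus_width_bits: int) -> float:
    """Per-pin data rate (Gbps) implied by a peak bandwidth (GB/s) over a given bus width (bits)."""
    return bandwidth_gbs * 8 / bus_width_bits

print(implied_data_rate_gbps(640, 1024))  # 5.0 Gbps per pin for the quoted 1024-bit / 640 GB/s combination
print(implied_data_rate_gbps(640, 4096))  # 1.25 Gbps per pin if the effective interface were 4096 bits wide (arithmetic only)
```
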
Source: 3DCenter.org

156 Comments on Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

#101
xfia
Most CPUs can handle any game fine, but not when it only runs on one core. PhysX is shady at best.. it was somewhat relevant when it got going, but they should have just shown developers how to make a game scale across more cores and reduce CPU dependency.
#102
ZoneDymo
A new standard in physics would be nice.
Was Havok 2.0 not coming with massive improvements?

PhysX as it stands is just a joke, mainly because no one should bother with it on any serious level if only Nvidia users can make use of it.
All it is is gimmicky effects here and there, some smoke moving in Batman, some flying orbs in Borderlands or Warframe, whoopdishit yo.
#103
arbiter
RejZoR: FUN FACT. LEGAL REASONS.

And so what if TressFX is limited only to hair. It does work on ANY graphics card with DirectCompute support. You can't even have PhysX hardware-accelerated hair if you just happen to have a Radeon...
Fun Fact: AMD wasn't breaking any laws if they let it go. As long as they didn't provide any support for the hack or promote it, they would have been free and clear of any liability.
ZoneDymo: All it is is gimmicky effects here and there, some smoke moving in Batman, some flying orbs in Borderlands or Warframe, whoopdishit yo.
"sarcasm" yeah, those effects don't make the game look more real, like smoke would in real life "/sarcasm"

I am done with this thread; it's turning into an AMD fanboy thread trying to twist history to make AMD look like a superhero with a can-do-no-wrong persona and Nvidia the supervillain.
#105
xfia
Just way off topic.. it's whatever anyway.. us PC gaming nerds gotta argue about something..
#106
Frick
Fishfaced Nincompoop
Aquinus: I care that the 970 has a multi-monitor idle consumption of <5 watts and the 290 is closer to 55 watts. So, yes. People like me, who aren't gaming most of the time (but do still game regularly) but are using multiple monitors for productivity reasons, do care a little bit, as power usage adds up over time. Is it a huge factor? No. Is it one worth considering? Sure.

Also, higher efficiency would mean lower temps or more overhead for higher clocks, which is never a bad thing.
This is a tangent, but it's partly the reason I didn't buy that GTX 570 for €35. 80 W in multi-monitor mode is awful, and since I'm running old monitors, just the GPU and monitors would be about 180 W. :(

Anyway, bring on the low/mid level I say! That's where the REAL action is!
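
To put the idle-power numbers above into rough perspective, a minimal sketch of what a sustained 50 W gap costs over a year; the 8 hours/day duty cycle and EUR 0.25/kWh rate are illustrative assumptions, not figures from the thread:

```python
# Rough yearly cost of an extra 50 W at idle (illustrative assumptions only)
extra_watts = 55 - 5          # multi-monitor idle gap quoted above (W)
hours_per_day = 8             # assumed desktop duty cycle
price_per_kwh = 0.25          # assumed electricity price in EUR/kWh

kwh_per_year = extra_watts * hours_per_day * 365 / 1000
print(f"{kwh_per_year:.0f} kWh/year, about EUR {kwh_per_year * price_per_kwh:.2f}/year")
# -> 146 kWh/year, about EUR 36.50/year
```
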
#107
Ryrynz
RichF: 28nm?

"The only truly new silicon with the R9 300 series, is 'Fiji.'"

Ok.. Nevermind. At least there will be something that's actually new.

"4 GB could be the standard memory amount."

Awful. 6 GB should be the standard for high-end cards going forward.
I wholeheartedly agree. No doubt they'll know of the demand for them, and some will become available at a later stage. Quite a few games are managing to break 3 GB now, so those new cards only just cut it.
#108
HumanSmoke
the54thvoid: Thread has died....
Aye, was there any doubt?

Once the words "rebrand" and "PhysX" got posted it was light the blue touch paper and cue the Mission Impossible theme.
#109
bencrutz
HumanSmoke: FUN FACT #3: When the PhysX hack for AMD cards arrived, it was AMD who threw up the roadblock.
RejZoR: FUN FACT. LEGAL REASONS.
arbiter: Fun Fact: AMD wasn't breaking any laws if they let it go. As long as they didn't provide any support for the hack or promote it, they would have been free and clear of any liability.
ok, this is amusing :roll:
#110
HumanSmoke
bencrutz: ok, this is amusing :roll:
Well, that's what happens when everybody is making a different point, I guess. The original point I was making was that ATI (and later AMD) had ample opportunity to acquire PhysX. They simply didn't want it in any way, shape or form... and by all accounts (especially judging by the reaction here), people here don't either (FWIW, it's not a must-have feature for me either). Yet the frothing-at-the-bung Pavlovian response over what is supposed to be a worthless feature nobody wants (least of all AMD) is evident every time that four-letter word is posted.
#111
TRWOV
Late to the party. Not much to add except some off-topic in regard to PhysX: I can't understand why nVidia would remove support for running PhysX in a hybrid setup :banghead: They cited "support reasons" (read: they won't test nVidia + AMD configs, so they can't officially support them), but why not put out a beta driver or something for hybrid configurations with a "no support" disclaimer? Or at the very least not block mods?

Not that it matters much nowadays (can't recall any recent PhysX game except for the Batmans), but I resented nVidia a lot back in the day... more so considering that I was one of the suckers that bought the Ageia card back then, and nVidia threw us under the bus as soon as they bought them.

/rant
#112
HumanSmoke
TRWOV: Not that it matters much nowadays (can't recall any recent PhysX game except for the Batmans), but I resented nVidia a lot back in the day... more so considering that I was one of the suckers that bought the Ageia card back then, and nVidia threw us under the bus as soon as they bought them.
That was on the cards as soon as the ink was dry. AGEIA went belly up because the PPU was too expensive for the feature set. Nvidia wouldn't make the same mistake (as both the original release and the comments in this news item alluded to). FWIW, even if AMD had pulled the trigger on buying AGEIA, the exact same outcome would have eventuated. Remember that ATI/AMD was all about GPU-accelerated physics back when it was fashionable (their "Boundless Gaming" initiative). As you say, it matters little now. CPU physics is widely available (Havok, Bullet, etc.), and game engines with their own built-in physics arrive on occasion.
#113
RejZoR
I just wish physics would get standardized under DirectX. This is the only way to move gaming technology and realism further. Without unified physics support, physics CANNOT be used as a core game element; it can only be used for useless eye candy, because otherwise one or the other GPU camp wouldn't even be able to play the game. If they could standardize everything else, why not physics as well? A dedicated physics API would be great. Something DirectCompute could have become, but just didn't...
#114
xfia
o_O Microsoft doesn't need to implement a useless NV technology.. DX12 will do away with the nonsense on how the CPU should be used.. DX11 is already good at it, but it will be easier anyway.
#115
Krekeris
the54thvoid: Thread has died....
It was nice to read, well, until arbiter's comments. Sad to see such hardcore fanboys.
#116
FordGT90Concept
"I go fast!1!11!1!"
I am disappoint. I was hoping 380X was the card with 4096 stream processors, not 390/390X. The only thing I'm not disappointed about is 380/380x/390/390X are all coming really soon. :D

I'm guessing 390 will go for $400 and 390X will go for $500 or more. If those prices are $100 cheaper than that guesstimate, it'll be a tough choice for me to pick between the two.
#117
Sony Xperia S
FordGT90Concept: I am disappoint. I was hoping 380X was the card with 4096 stream processors, not 390/390X. The only thing I'm not disappointed about is 380/380x/390/390X are all coming really soon. :D

I'm guessing 390 will go for $400 and 390X will go for $500 or more. If those prices are $100 cheaper than that guesstimate, it'll be a tough choice for me to pick between the two.
Don't worry, guys, the most important thing was already said.

Many will be simply skipping anything on this pesky 28 nm process. :(

R.I.P thread! :D
#118
Akrian
Those specs seem impressive. But 4 gigs of VRAM? I mean, at 4K I can already hit the wall with current AMD GPUs (running quad R9 290X). Do they plan to use that memory bandwidth to swap out textures so fast that it will solve the issue of hitting the memory limits and stuttering? Isn't that a gamble, because it will require a lot of driver optimizations to do it efficiently? And AMD's drivers have been lacking in the quality department for the past few iterations at least.
#119
GhostRyder
Well, it's funny this thread became very foolish again. It seems we cannot have a thread regarding GPUs without resorting to name-calling from each of the hardcore fans, or falling into the usual "well, my company has (insert feature that I will pointlessly rant about being the best thing since sliced bread) and yours doesn't, praise (insert company) :respect:".

If you guys have a problem with certain people making fanboy comments, ignore the person and move on already; otherwise you just make them feel important and show that you care while they cook up excuses/retorts, which in turn ruins threads.

Back to the topic at hand: the only disappointment in this announcement for me is that the R9 380X is not going to be the next big part/a new part. Though I guess, depending on how well they improve/refine Hawaii into Grenada, we might see something truly impressive. The real chip everyone has their eyes on, though, is the R9 390X and what it brings to the table.
#120
jabbadap
Well, Hawaii has slow memory bundled with a 512-bit bus, so there's room for improvement (tuning the memory controller and supporting faster VRAM).

Then of course, depending on which TSMC manufacturing node they are using, moving to a more efficient 28 nm node might improve energy consumption (I think NVIDIA uses 28nm HPC for GM204/GM206 and AMD uses 28nm HP? Not the same node anyway). So can a Grenada-based R9 380 series be faster than the GTX 980/970? Sure, but better perf/W is very unlikely.
#121
THU31
xfia: I could see how it would be possible for shrinking GPUs to show many of the same problems. They are loving smaller lithography for mobile devices, but perhaps there are bigger hurdles on the high-end GPU side of things.
The problem exists because the die size gets smaller and smaller, as they are not increasing the number of cores. Lynnfield was 290 mm2, Sandy Bridge was 216 mm2, Ivy Bridge was 160 mm2. With Broadwell this will probably get below 120 mm2.
By the way, we are still paying pretty much the same price for quad core CPUs, and that is absolutely pathetic. The manufacturing costs must be insanely low.

2009 Lynnfield 45 nm 290 mm2 - 196 $
2014 Haswell 22 nm 177 mm2 - 182 $ (242 $ for a model that allows overclocking, sick)

We should have had six-core CPUs for 200 $ by now.



Die size is not a problem for GPUs. High-end GPUs are usually between 400 and 600 mm2, so heat dissipation is not a problem.
Whenever they change the node, they pack a lot more transistors into the chips, making them much faster while keeping a similar die size. Intel do not do that anymore, they are reducing the die size without increasing performance or clock speeds.
#122
xfia
Thanks.. was hoping someone could bridge the difference if I threw it in there.

Is it possible they can't fit any more transistors since it's smaller?

Where would I look for some more inside info on chip engineering?
#123
Sony Xperia S
Harry Lloyd: The problem exists because the die size gets smaller and smaller, as they are not increasing the number of cores. Lynnfield was 290 mm2, Sandy Bridge was 216 mm2, Ivy Bridge was 160 mm2. With Broadwell this will probably get below 120 mm2.
By the way, we are still paying pretty much the same price for quad core CPUs, and that is absolutely pathetic. The manufacturing costs must be insanely low.

2009 Lynnfield 45 nm 290 mm2 - 196 $
2014 Haswell 22 nm 177 mm2 - 182 $ (242 $ for a model that allows overclocking, sick)

We should have had six-core CPUs for 200 $ by now.



Die size is not a problem for GPUs. High-end GPUs are usually between 400 and 600 mm2, so heat dissipation is not a problem.
Whenever they change the node, they pack a lot more transistors into the chips, making them much faster while keeping a similar die size. Intel do not do that anymore, they are reducing the die size without increasing performance or clock speeds.
Yeah, we discovered the hot water.... :)

By the way.... you can counter this trend simply by ignoring the existence of Intel.

Just be smarter and buy all AMD. ;)

#124
the54thvoid
Intoxicated Moderator
Sony Xperia S: Yeah, we discovered the hot water.... :)

By the way.... you can counter this trend simply by ignoring the existence of Intel.

Just be smarter and buy all AMD. ;)

No thanks, moar power required. Zen is too far away. Might mix Intel and AMD when 390X comes out though.
#125
THU31
If AMD finally caught up, maybe Intel would have to do something. Right now they are focusing on mobile, while desktops just sit there doing nothing, because they have had the most powerful CPUs since 2006.
I miss the days when AMD CPUs were better for gaming (Duron, Athlon XP, Athlon 64), while being cheaper as well.

At least the GPUs are OK, though power hungry. But they are not really AMD's own; they just bought what was good.