Monday, February 9th 2015

Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

AMD's upcoming Radeon R9 380X and R9 380 graphics cards, with which it wants to take on the GTX 980 and GTX 970 head-on, will be based on a "new" silicon codenamed "Grenada." Built on the 28 nm fab process, Grenada will be a refined variant of "Hawaii," much the same way "Curacao" was of "Pitcairn" in the previous generation.

The Grenada silicon will have the same specs as Hawaii: 2,816 GCN stream processors, 176 TMUs, 64 ROPs, and a 512-bit wide GDDR5 memory interface holding 4 GB of memory. Refinements in the silicon over Hawaii could allow AMD to increase clock speeds enough to outperform the GTX 980 and GTX 970. We don't expect the chip to be any more energy-efficient at its final clocks than Hawaii; AMD's design focus appears to be performance. AMD could save itself the embarrassment of a loud reference-design cooler by leaving the chip to quiet custom-design cooling solutions from AIB (add-in board) partners from day one.
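
For context, peak memory bandwidth follows directly from bus width and per-pin data rate. A minimal sketch, assuming Hawaii's stock 5 Gbps effective GDDR5 data rate (whether Grenada keeps or raises that figure is exactly the open question here):

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate.
# The 5 Gbps effective GDDR5 rate is Hawaii's stock figure; Grenada's
# final memory clock is an assumption at this point.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gb_s(512, 5.0))  # -> 320.0 GB/s (Hawaii, and Grenada if unchanged)
```
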
In other news, the "Tonga" silicon, which made its debut with the performance-segment Radeon R9 285, could form the foundation of the Radeon R9 370 series, consisting of the R9 370X and the R9 370. Tonga physically features 2,048 stream processors based on the more advanced GCN 1.3 architecture, 128 TMUs, 32 ROPs, and a 384-bit wide GDDR5 memory interface. Both the R9 370 and R9 370X could ship with 3 GB of memory as standard.

The only truly new silicon in the R9 300 series is "Fiji." This chip is designed to drive AMD's high-end single- and dual-GPU graphics cards, and is built to compete with NVIDIA's GM200 silicon and the GeForce GTX TITAN-X it will debut with. Fiji features 4,096 stream processors based on the GCN 1.3 architecture (double that of "Tonga"), 256 TMUs, 128 ROPs, and a 1024-bit wide HBM memory interface offering 640 GB/s of memory bandwidth. 4 GB could be the standard memory amount. The three cards AMD will carve out of this silicon are the R9 390, the R9 390X, and the R9 390X2.
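
Running the rumored HBM figures through the same formula shows they only reconcile under assumptions: a lone 1024-bit interface would need 5 Gbps per pin to hit 640 GB/s, far beyond first-generation HBM's roughly 1 Gbps per pin, so the rumor more plausibly describes four 1024-bit stacks (4096 bits total) at about 1.25 Gbps. A sketch of both readings, with the per-pin rates as labeled assumptions:

```python
# Same bandwidth formula applied to the rumored Fiji HBM figures.
# First-generation HBM is organized as 1024-bit-wide stacks; both
# per-pin data rates below are assumptions used to reconcile the
# rumored 640 GB/s number, not confirmed specs.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gb_s(1024, 5.0))   # -> 640.0 GB/s (single stack, implausibly fast pins)
print(peak_bandwidth_gb_s(4096, 1.25))  # -> 640.0 GB/s (four stacks at a plausible HBM rate)
```
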
Source: 3DCenter.org

156 Comments on Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

#26
RejZoR
Breit: Enough bandwidth to the VRAM should help in scaling performance better to higher resolutions, and AMD always did scale better at higher resolutions. Question is if the 4GB VRAM will hold them back. I would've liked to see 8GB standard, at least on the Fiji part.
I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) HBM stacked memory, and everyone is questioning "would 4GB be enough". Confused much...
Posted on Reply
#27
mroofie
RejZoR: I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) HBM stacked memory, and everyone is questioning "would 4GB be enough". Confused much...
Well, double standards exist within the AMD clan as well, so I'm not sure what you're trying to imply here? :0

"Remember 2 wrongs don't make a right" GTA IV WKTT :p
Breit: Those 4GB NVIDIA cards came out a year ago. At that time 4GB might have been enough. But now we are approaching the 4K era and as such we need more VRAM. This is logical as the cards will be faster too, enabling gaming at 4K.
obviously :D
Posted on Reply
#28
Breit
RejZoR: I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) HBM stacked memory, and everyone is questioning "would 4GB be enough". Confused much...
Those 4GB NVIDIA cards came out a year ago. At that time 4GB might have been enough. But now we are approaching the 4K era and as such we need more VRAM. This is logical as the cards will be faster too, enabling gaming at 4K.
Posted on Reply
#29
Sony Xperia S
Breit: Those 4GB NVIDIA cards came out a year ago. At that time 4GB might have been enough. But now we are approaching the 4K era and as such we need more VRAM. This is logical as the cards will be faster too, enabling gaming at 4K.
I think that either they will find a way to use these 4 GB, or this will simply be an artificial marketing bottleneck to sell next-generation cards better (perhaps a similar "solution" to what NVIDIA did with the crippled 970 :rolleyes:).

Don't forget that these cards will have a short lifespan, and the die-shrunk GPUs are expected soon.
Posted on Reply
#30
RejZoR
GTX 970 and 980 did not come out a year ago...
Posted on Reply
#31
xfia
There is nothing wrong with gaming at 4K with the GPUs that are out now. It's just some derpy, unoptimized, CPU-bound games that make people go WTF.

Shrinking also means less space to work with, and no guaranteed performance gains. Any GPU with full DX12 support will be good stuff for a pretty long time.
Posted on Reply
#32
the54thvoid
Intoxicated Moderator
RejZoR: I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) HBM stacked memory, and everyone is questioning "would 4GB be enough". Confused much...
Don't try comparing the arguments over the 970's memory shenanigans to AMD's next uber chip (Fiji). I'm not clued up on it, but many say HBM only caters for a 4GB memory allowance (for now?...). The 970 is the cheaper Maxwell performance part, whereas the 390X will be the single-GPU top tier.
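
For what it's worth, the 4GB ceiling people cite follows from first-generation HBM's per-stack capacity. A minimal sketch, assuming the commonly reported figures of 1 GB per stack and four stacks on the interposer:

```python
# Why first-generation HBM tops out at 4 GB. Both figures are the
# commonly reported gen-1 assumptions, not confirmed Fiji specs.
gb_per_stack = 1  # first-gen HBM capacity per stack
stacks = 4        # stacks assumed to fit on the interposer
print(gb_per_stack * stacks, "GB")  # -> 4 GB
```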

And yes, those who bought cards with 4GB (or not, as the 970 is) would have figured that into their choice. If the 390X is to be AMD's next-gen top-tier card, you would hope it would have more, as AIBs have already seen fit to release an 8GB 290X, albeit with small rewards at 4K.

IMO, I don't know if we need >4GB for gaming purposes except on poorly coded things (look at the recent CoD for bad memory hogging, or Titanfall IIRC). But if we do need >4GB in the next year or so, I'm pretty sure there will be developments to allow higher memory usage on the AMD chips.

So, to be plain - 4GB won't be an immediate worry and I'm sure it will be addressed when needed.
Posted on Reply
#33
RejZoR
Seeing how I can play stuff with everything maxed out on a 3GB HD7950, I feel like 4GB is plenty for the next 2-3 years (I know not everyone plays at "only" 1080p, but it's kind of the norm these days). Unless someone makes a drastic jump in quality (and compute demand). Unreal Engine 4 or Frostbite 4, maybe...
Posted on Reply
#34
GreiverBlade
Well, it's not so bad if the 380/380X are 290/290X rebrands with refinements... maybe my Kryografics Hawaii block will be compatible with it :roll:

Will wait for the 390 though ;)
Posted on Reply
#35
RejZoR
Yeah, the R9-280X was indeed slightly faster than the HD7970, despite being seemingly the same GPU.
Posted on Reply
#36
xfia
I like how you said "seemingly" there... there is a bit more going on than rebranding.
Posted on Reply
#37
Breit
RejZoR: Seeing how I can play stuff with everything maxed out on a 3GB HD7950, I feel like 4GB is plenty for the next 2-3 years (I know not everyone plays at "only" 1080p, but it's kind of the norm these days). Unless someone makes a drastic jump in quality (and compute demand). Unreal Engine 4 or Frostbite 4, maybe...
Maybe game developers know that the majority of cards only have <=4GB VRAM and as such tune their games around that? With more VRAM to work with, they may implement features that require more memory and hence cannot be played with "everything enabled ultra uber whatever" settings...
Posted on Reply
#38
xfia
I could see how it would be possible for shrinking GPUs to show many of the same problems. They are loving smaller lithography for mobile devices, but perhaps there are bigger hurdles on the high-end GPU side of things.
Posted on Reply
#39
Ferrum Master
RejZoR: Yeah, the R9-280X was indeed slightly faster than the HD7970, despite being seemingly the same GPU.
Flash an R9 280X BIOS and enjoy the same... I did it with my card... put a GB 280X BIOS on it, as the PCB and VRM are completely the same; just different RAM timings, clocking table, and power limit.
Posted on Reply
#40
Serpent of Darkness
megamanxtreme: 380 and 380X will be R9 290/290X?
Rebrand for the lose?
For my FX-6300, I wasn't looking for anything more than GTX 970 performance, but everything is a mixed bag.
Both brands do it. The AMD 7970 was part of the AMD 7000 generation, and the 7990 was the king of that generation. The R9-280 is the rebrand of a 7970 with better frame-time variance and other tweaks, and the R9-380 and 380X are the rebrands of the R9-290 and 290X. The rumor mills hint that the R9-390 and 390X could see a nice TDP drop: almost twice the stream processors at roughly the TDP of a 290X (around 300 watts), which would put the R9-380 and 380X at a TDP below the current generation, less than the R9-290's. Basically, the new 380s will consume less power and gain the minor improvements the 290 has been missing, while the R9-390 gets the new GCN and all the premium stuff from the AMD camp.

For Nvidia, the GTX 760 is a GTX 680 rebrand, and the GTX 960 is a GTX 780 rebrand. With the economy the way it is, selling off leftover stock is not what it used to be for either side. So NVidia and AMD play the game of rebranding old chips into the next generation of graphics cards, to get people on budget constraints to buy them instead of a high-end card. A chip not sold, or a graphics card not sold, is money lost in the long run.
RejZoR: I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) HBM stacked memory, and everyone is questioning "would 4GB be enough". Confused much...
I think it's been stigmatized into the mind of every high-end gaming enthusiast who knows a thing or two about AMD and NVidia graphics cards that NVidia is king at 1080p, and at anything above that, AMD dominates. Dominates barely... This could explain the expectations set by consumers and their response: "The 390 only has a 4GB framebuffer, isn't that a little lacking?" In addition, if you look at the past few generations of graphics cards, NVidia comes out with 2GB while AMD comes out with 3GB in the same generation. This has always been the trend: AMD will always have 1 GB more memory than NVidia's card of the same tier and generation. On the NVidia side, they try to make up for it with higher memory bandwidth; this is their way of making up for the loss in performance. Personally, I don't think 5GB or more is needed, and if consumers really need the extra VRAM, they could have just gotten a Sapphire R9-290X with the 8GB framebuffer, or a Titan Black. Speaking of the future, there's no doubt in my mind that some non-reference variant of the R9-390X from one of the vendors will have 8GB or more in the not-too-distant future. Then again, there's really no need to go that high unless you are going surround or 4K; I bet the share of consumers at that level, in a position to own a 4K TV or surround setup, is anywhere from 1 to 10%. The rest will probably still hover around 1080p for the next 2 to 5 years. The only games that, in theory, can go above 4GB of VRAM would be Star Citizen, and Skyrim with a heavy set of mods, on a 1080p setup. Crysis 3 at full-blast settings could peak around 3.6 GB of VRAM. Maybe that Lord of the Rings Mordor game on high textures can go past it at 1080p.

edit: it's late, I edited my stuff, I don't give a shet if you are a grammar Nazi...
Posted on Reply
#41
Ferrum Master
Serpent of Darkness: R9-280 is the rebrand of a 7970
7950
xfia: bigger hurdles on the high-end GPU side of things.
Apples and oranges... The GPU die also cools off through the PCB's copper layers, as it is soldered directly to them; it isn't the first very hot card in history, and a GPU's die package is HUGE in itself... And we can't really compare the problems Intel has with TSMC's.
Posted on Reply
#42
W1zzard
Ferrum Master: Grenada should have memory compression just as Tonga has, a GCN upgrade.
No, it's just a rebrand. No new memory compression, not GCN 1.3.
Posted on Reply
#43
Breit
Serpent of Darkness: GTX 960 is a GTX 780 rebrand
This is just wrong. The GTX780 was based on a cut-down GK110 core (Kepler architecture) and the GTX960 is based on the fully-enabled GM206 core (Maxwell architecture). Don't confuse these two...
Posted on Reply
#44
Ferrum Master
W1zzard: No, it's just a rebrand. No new memory compression, not GCN 1.3.
Ah crap... then I hope it won't go like the 7970, where the newer XTL die revision actually clocked worse, maybe a coincidence. And the stock cooler... what will it be again... the same hairdryer?
Posted on Reply
#45
Aquinus
Resident Wat-man
btarunr: AMD could save itself the embarrassment of a loud reference-design cooler by leaving the chip to quiet custom-design cooling solutions from AIB (add-in board) partners from day one.
WTF is this? Hardly the attitude appropriate for a news article... This quote isn't news, it's speculation. I've found reference coolers to be some of the best coolers, sound aside. AMD needs to save itself some embarrassment by not screwing up and making sure that these GPUs don't underperform. If the 390X doesn't compete, AMD has nothing with this release from what I gather. I seriously doubt the type of cooler on release will determine if the GPU is a contender or not.
Posted on Reply
#46
the54thvoid
Intoxicated Moderator
Serpent of Darkness: For Nvidia, the GTX 760 is a GTX 680 rebrand
Yeah, that's wrong too. 760 was the 670 and the 770 was the 680. Both Kepler.

As Breit rightly says, GM is not GK.
Posted on Reply
#47
Ferrum Master
Aquinus: WTF is this? Hardly the attitude appropriate for a news article... This quote isn't news, it's speculation. I've found reference coolers to be some of the best coolers, sound aside. AMD needs to save itself some embarrassment by not screwing up and making sure that these GPUs don't underperform. If the 390X doesn't compete, AMD has nothing with this release from what I gather. I seriously doubt the type of cooler on release will determine if the GPU is a contender or not.
Actually, you both are exaggerating... the 290X stock blower is a disaster, and that's a fact; it's poorly made to begin with. But yes, you are right, a news article should not contain such bashing, that's our task to do :D.

The cooler actually can do wonders... clocks really matter, and have since the days of the turbo button on PC cases, and the competition is so tight that a few hundred MHz can carve out the lead for sure, especially in minimum FPS.
Posted on Reply
#48
refillable
Wow, I'm disappointed... I thought the 380X was Fiji and Bermuda was the 390X. Well, I guess to put this in perspective: the 380X will shoot straight at the 970, while the 390X will compete with the future Titan II/980 Ti.

What could be disappointing is the efficiency. With these chips you will only be getting 285-like efficiency, which is nowhere near Maxwell's. Heat should be maintained pretty well, IMO; AIB coolers with dual fans are going to keep the temperatures down.
Posted on Reply
#49
RejZoR
Who cares about efficiency, really? If they can still manage noise levels, I don't really care. Efficiency mattered for miners, but you don't play games 24/7, or do you?
Posted on Reply
#50
Aquinus
Resident Wat-man
RejZoR: Who cares about efficiency, really? If they can still manage noise levels, I don't really care. Efficiency mattered for miners, but you don't play games 24/7, or do you?
I care that the 970 has a multi-monitor idle consumption of <5 watts while the 290 is closer to 55 watts. So, yes. People like me, who aren't gaming most of the time (but do still game regularly) and are using multiple monitors for productivity reasons, do care a little bit, as power usage adds up over time. Is it a huge factor? No. Is it one worth considering? Sure.
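
To put rough numbers on that, here's a back-of-the-envelope sketch: the ~50 W delta comes from the figures above, while the hours of desktop use and the $0.12/kWh electricity rate are purely illustrative assumptions:

```python
# Annual cost of a ~50 W multi-monitor idle-power delta.
# The 50 W figure is from the ~55 W vs. <5 W numbers above; hours/day
# and the electricity rate are illustrative assumptions, not measurements.

delta_watts = 55 - 5    # idle power difference between the two cards
hours_per_day = 8       # assumed non-gaming desktop use per day
price_per_kwh = 0.12    # assumed electricity price, USD per kWh

kwh_per_year = delta_watts / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year ≈ ${cost_per_year:.2f}/year")  # -> 146 kWh/year ≈ $17.52/year
```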

Also, higher efficiency would mean lower temps or more overhead for higher clocks which is never a bad thing.
Posted on Reply