Wednesday, May 8th 2013

AMD's Answer to GeForce GTX 700 Series: Volcanic Islands

GPU buyers can breathe a huge sigh of relief: AMD isn't fixated on next-generation game consoles, and the late-2013 launch of its next GPU generation comes with good reason. The company is building a new GPU micro-architecture from the ground up. Codenamed "Volcanic Islands," with members codenamed after famous islands along the Pacific Ring of Fire, the new GPU family sees AMD rearranging the component hierarchy within the GPU in a big way.

Over the past three GPU generations, which used the VLIW5, VLIW4, and Graphics Core Next SIMD architectures, the component hierarchy was left essentially untouched. According to an early block diagram of one of the GPUs in the series, codenamed "Hawaii," AMD will designate parallel and serial computing units. Serial cores based on either of the two architectures AMD is licensed to use (x86 and ARM) could handle part of the graphics processing load, while the stream processors of today make up the GPU's parallel processing machinery.

We can't make out the text in the rather blurry block diagram, but we are fairly convinced that if it's authentic, AMD is making some big changes. Another reason for AMD's delay could be the silicon fab process. "Tahiti," as implemented on the Radeon HD 7970 GHz Edition, already poses a high thermal envelope. AMD doesn't want the 28 nm process to restrict its next-generation architecture development, and is holding out until the 20 nm process is in place at TSMC. The fab set Q4 as its tentative bulk-manufacturing date for the process.

The source that leaked the block-diagram also posted specifications of the chip that's codenamed "Hawaii," which appears to be the flagship part.
  • 20 nm silicon fab process
  • 4096 stream processors
  • 16 serial processor cores
  • 4 geometry engines
  • 256 TMUs
  • 64 ROPs
  • 512-bit GDDR5 memory interface
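The leaked figures above can be turned into rough, back-of-the-envelope throughput numbers. Note that the leak gives no clock speeds, so the 1 GHz core clock and 6 Gbps memory data rate in this sketch are purely illustrative assumptions, not part of the rumor:

```python
# Back-of-the-envelope throughput from the leaked "Hawaii" specs.
# The leak gives no clocks, so the values below are assumptions.
stream_processors = 4096
memory_bus_bits = 512
assumed_clock_ghz = 1.0       # hypothetical core clock, not from the leak
assumed_gddr5_gbps = 6.0      # hypothetical per-pin data rate, not from the leak

# Each stream processor can do one fused multiply-add (2 FLOPs) per cycle.
fp32_tflops = stream_processors * 2 * assumed_clock_ghz / 1000
bandwidth_gbs = memory_bus_bits / 8 * assumed_gddr5_gbps

print(f"Theoretical FP32: {fp32_tflops:.2f} TFLOPS")  # 8.19 TFLOPS at 1 GHz
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 384 GB/s at 6 Gbps
```

Even under conservative clock assumptions, those numbers would be roughly double the Radeon HD 7970's on both counts, which is why the leak reads as a flagship part.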
Source: ChipHell. Many Thanks to SIGSEGV for the tip!

145 Comments on AMD's Answer to GeForce GTX 700 Series: Volcanic Islands

#1
Apocolypse007
by: Vinska
It's nice and all, but too bad this was already posted 26 posts ago (post #74).
Here, have a cookie.
I'm sorry, I didn't notice. Like I said, I tend to only cruise the news articles here these days and didn't see all 100+ replies (only the first page or so of replies ends up on the news article).
Posted on Reply
#2
NeoXF
by: Vinska
It's nice and all, but too bad this was already posted 26 posts ago (post #74).
Here, have a cookie.
Cool story bro... and you're trolling him... why?
Posted on Reply
#3
Hilux SSRG
"The fab set Q4 as its tentative bulk manufacturing date for the process."

So consumers will be able to purchase limited quantities in Q4 2013 and open availability in Q1 2014? If so, then what will AMD launch between now and then?
Posted on Reply
#4
HumanSmoke
by: Casecutter
Go blow that smoke somewhere else. When you look at power usage in actual gaming and average the usage over those titles, the 7970 is only using 3-4% more watts than the GTX 680, and was more often 5-7% off the GTX 680's performance. Now, true, the GHz versions went higher on power usage in synthetic tests (not much fun playing synthetic tests), but looking at gaming, the GHz version actually came in lower than a GTX 680 by 7 watts. In real-world gaming a GHz will in no way change what you pay in your power bill; it might, dare I say, cost you less...
http://www.hardocp.com/article/2012/03/22/nvidia_kepler_gpu_geforce_gtx_680_video_card_review/13
http://www.hardocp.com/article/2012/10/30/xfx_double_d_hd_7970_ghz_edition_video_card_review/10
Well, something must have happened between those old tests and the newer ones at [H]OCP...


I don't doubt that the power usage cost is negligible for the likely user bases of the cards, but I don't see the 7970GE being more frugal in power consumption than the 680 either...except in a very small minority of games.

Anyhow, I'd say all bets are off with any new architectures on new processes. I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI
Posted on Reply
#5
theoneandonlymrk
by: HumanSmoke
Well, something must have happened between those old tests and the newer ones at [H]OCP...
http://www.hardocp.com/images/articles/1361915444mQwWwGD0oO_9_1.png

I don't doubt that the power usage cost is negligible for the likely user bases of the cards, but I don't see the 7970GE being more frugal in power consumption than the 680 either...except in a very small minority of games.

Anyhow, I'd say all bets are off with any new architectures on new processes. I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI
The slight efficiency you're claiming for Kepler over Fermi, whilst real, is in part due to the lack of compute power and its advanced clock and power gating, but it isn't that special if you understand the tech and the way nvidia aimed GK104 so specifically at gaming, and it isn't out of AMD's reach at all.
Posted on Reply
#6
Mathragh
imho, a big part of Kepler's efficiency is the result of their new power tuning, which can keep the voltage a lot closer to the optimum than previous generations, resulting in a lower voltage needed for specific clocks, and thus lower power consumption and heat.
Posted on Reply
#7
HumanSmoke
by: theoneandonlymrk
The slight efficiency you're claiming for Kepler over Fermi, whilst real, is in part due to the lack of compute power and its advanced clock and power gating, but it isn't that special if you understand the tech and the way nvidia aimed GK104 so specifically at gaming, and it isn't out of AMD's reach at all.
Really?
I was comparing apples to apples: GF100/GF110 to GK110.

GK110 is the Kepler µarchitecture, isn't it?
Posted on Reply
#8
cadaveca
My name is Dave
by: HumanSmoke
Well, something must have happened between those old tests and the newer ones at [H]OCP...
http://www.hardocp.com/images/articles/1361915444mQwWwGD0oO_9_1.png

I don't doubt that the power usage cost is negligible for the likely user bases of the cards, but I don't see the 7970GE being more frugal in power consumption than the 680 either...except in a very small minority of games.

Anyhow, I'd say all bets are off with any new architectures on new processes. I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI
My ASUS 7970 Matrix @ 1200/1650 pulls less than 250W in gaming. In Furmark, those numbers might be possible, but otherwise they seem incredibly high to me. My 7950s don't pull over 200W either. More like 150W.

I also doubt their full system, minus VGA pulled only 63W as they report. Just saying. Feel free to check ANY of my motherboard reviews to find more realistic numbers for system power consumption. I'd almost say that [H]ardOCP's reviewer there didn't test anything, really.

The test setup is listed as a 2500K @ 4.8 GHz. Average power consumption of such a CPU is around 150W in Prime95, and about 90W in gaming. It's impossible for the CPU, fans, and drives alone to be only 63W. Just saying. Their numbers are 1000% false. I'd subtract at least 75W from each of those listed numbers. Even the NVidia numbers are suspect.
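cadaveca's objection is simple addition; as a sketch, using the CPU estimate from the comment plus a hypothetical allowance for the rest of the system (these are not measurements):

```python
# Sanity check of the claimed 63W "system minus VGA" figure.
# cpu_gaming_watts is the estimate from the comment; the rest is hypothetical.
cpu_gaming_watts = 90       # 2500K @ 4.8 GHz under a gaming load (estimate)
rest_of_system_watts = 40   # hypothetical: board, fans, drives, PSU losses
claimed_non_vga = 63

estimated_non_vga = cpu_gaming_watts + rest_of_system_watts
print(f"Estimated non-VGA draw: {estimated_non_vga} W")              # 130 W
print(f"Shortfall vs claim: {estimated_non_vga - claimed_non_vga} W")  # 67 W
```

Under those assumptions the claimed baseline is short by roughly the draw of a whole second CPU, which is the crux of the argument.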
Posted on Reply
#9
theoneandonlymrk
by: HumanSmoke
Really?
I was comparing apples to apples: GF100/GF110 to GK110.
GK110 is the Kepler µarchitecture, isn't it?
Yea yeah, GK110's power efficiency is way beyond AMD... not, imho. Am I missing your point here?
Posted on Reply
#10
HumanSmoke
by: theoneandonlymrk
Yea yeah, GK110's power efficiency is way beyond AMD... not, imho. Am I missing your point here?
Yeah, you probably are. What I said earlier was:
I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI
So, if Nvidia can improve on efficiency between one µarch and another, then the same holds true for AMD going from SI (Southern Islands) to VI (Volcanic Islands). I made no comparison between the two vendors regarding what might/could eventuate. The only comparison was in the earlier part of the post, pointing out to Casecutter the vagaries of what passes for "power usage under load" even within tests carried out by the same site.
If I'm comparing µarch to µarch, then I would generally look to compare the analogue of each architecture's GPUs. GF100/GF110 and GK110 are both similar in die size, placement within the product-stack hierarchy, and feature set.
by: cadaveca
I also doubt their full system, minus VGA pulled only 63W as they report. Just saying. Feel free to check ANY of my motherboard reviews to find more realistic numbers for system power consumption. I'd almost say that [H]ardOCP's reviewer there didn't test anything, really.
I don't claim the [H]ardOCP figures are definitive either - they really can't be, with the variance between tests conducted only a few months apart. I only used the [H]ardOCP result because Casecutter was using the same source for his initial argument.
Posted on Reply
#11
NeoXF
by: theoneandonlymrk
Yea yeah, GK110's power efficiency is way beyond AMD... not, imho. Am I missing your point here?
GK110 w/ 1 CU shut down still consumes at least 20W more than a 7970 GHz and gets trashed by it in anything compute. And it isn't that much faster in a good number of games; heck, it's slower in DiRT Showdown, Tomb Raider (under some circumstances) and some other titles.

...and that's considering they had more than a year to get it right (LMAO @ the people saying they didn't release it because GK104 was "good enough").

nVidia might have marginally better power consumption at the high end, but that is all, nothing special about it. Looking at the lower-end chips, AMD actually has a wider advantage over nVidia than the other way around with the higher-TDP cards.
Posted on Reply
#12
15th Warlock
by: NeoXF
GK110 w/ 1 CU shut down still consumes at least 20W more than a 7970 GHz and gets trashed by it in anything compute. And it isn't that much faster in a good number of games; heck, it's slower in DiRT Showdown, Tomb Raider (under some circumstances) and some other titles.

...and that's considering they had more than a year to get it right (LMAO @ the people saying they didn't release it because GK104 was "good enough").

nVidia might have marginally better power consumption at the high end, but that is all, nothing special about it. Looking at the lower-end chips, AMD actually has a wider advantage over nVidia than the other way around with the higher-TDP cards.
Why persist in spreading misinformation? This is from a previous thread about the GTX780:

From W1zzard's own GTX Titan review, which you can find on the TPU website:

Power consumption:
[power consumption charts]
The 7970 GHz is beaten by Titan in terms of power-consumption efficiency in every single scenario.

Relative performance (average of every single 3D benchmark on every resolution):
[relative performance charts]
The GTX Titan beats the 7970 GHz at every single resolution. Now for Tomb Raider, this is from W1zzard's review of the 7990:
[Tomb Raider performance charts]
The GTX Titan is faster than the 7970 GHz at every resolution in that particular game. You may counter that the 7990 is faster (and it is), but that's not even the point; dunno about DiRT Showdown, but if what you say is true (W1zzard doesn't even test cards using that game) then it's probably the only scenario where the 7970 GHz beats the Titan...

EDIT: Oh wait, I found these benchmarks using DiRT Showdown at Anand's:
[DiRT Showdown charts]
Only in one scenario does the 7970 "beat" Titan (if you call 0.9 FPS beating).

EDIT 2: As for the 7970 "trashing" Titan in compute performance: the theoretical max double-precision (FP64) performance for the 7970 is 1.08 TFLOPs, whereas Titan's is 1.3 TFLOPs. But don't take it from me; this is (once again) from Anandtech, an analysis of Titan's compute performance by Rahul Garg, a Ph.D. specializing in the field of parallel computing and GPGPU technology:
[compute benchmark charts]
Out of all the compute tests performed, only in the SystemCompute benchmark is Titan beaten by the 7970 GHz; in all other benchmarks Titan leaves the 7970 in the dust... I wouldn't exactly call that "trashing".
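For what it's worth, the two theoretical FP64 figures quoted in this comment can be compared directly; the sketch below uses only the numbers stated above, which are peak ratings, not measured throughput:

```python
# Theoretical peak FP64 throughput (TFLOPS) as quoted in the comment above.
titan_fp64 = 1.3
hd7970_fp64 = 1.08

lead_percent = (titan_fp64 / hd7970_fp64 - 1) * 100
print(f"Titan's theoretical FP64 lead over the 7970: {lead_percent:.1f}%")  # ~20.4%
```

So on paper the gap is about 20%, which is why the measured compute results, where either card can win by much larger margins, say more about drivers and workload fit than about raw FP64 ratings.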
Posted on Reply
#13
NeoXF
by: 15th Warlock
[snip]
I don't really care for your cherry-picked reviews TBH...

Also, I do want to remind everyone that TPU does not hold the absolute truth in regards to GPU reviews, you know?
Posted on Reply
#14
Vinska
Time for some lulz.
From the very same W1zz's review:
[charts from the same review]

Wahaha~!

Wahahaha~!
Posted on Reply
#15
15th Warlock
by: NeoXF
I don't really care for your cherry-picked reviews TBH...

Also, I do want to remind everyone that TPU does not hold the absolute truth in regards to GPU reviews, you know?
Cherry picked? This is the TPU forums you're posting at, what more appropriate than W1zzard's own reviews?

Not only that, but every scenario presented completely contradicts the facts mentioned by you. I'm not cherry-picking anything; I'm actually posting every single test result, and you mention TR and DiRT... now I'm the one cherry-picking?

You know, it doesn't really matter, if even showing you all the results (including studies from a Ph.D no less) won't convince you, then nothing will, if that's how you feel about this card in particular, you're entitled to your opinions...

Moving on...

EDIT: Just saw Vinska's reply, and I'm the one cherry picking, right...? I presented the condensed results for every single resolution in every single game... but this can drag on forever I see, it doesn't really matter, you guys win, OK ;)

Peace :)
Posted on Reply
#16
Vinska
by: 15th Warlock
EDIT: Just saw Vinska's reply, and I'm the one cherry picking, right...? I presented the condensed results for every single resolution in every single game... but this can drag on forever I see, it doesn't really matter, you guys win, OK ;)

Peace :)
yeah, it seems like cherry picking, but my point actually was:
Take these reviews with a HUGE grain of salt.

If you take a better look at W1zz's review, on Sleeping Dogs the 7970 [GE] had almost twice [!] the fps at 5760x1080 compared to 2560x1600. And at 1920x1200 it was slightly behind 5760x1080.
Similar situation with AC3 - at 5760x1080 it ran significantly faster compared to 2560x1600, and 1920x1200 had pretty much the same framerates as 5760x1080.

If that doesn't spell out the phrase "something is fishy with this review", then I don't know what would.

EDIT: I pointed out these things in the review discussion thread (there was a similar thing with the 7990, too). But no one seemed to care at all. Yet, I would LOVE to get an explanation or even a guess to WTH is wrong here (as something obviously is).
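The arithmetic behind that suspicion is easy to check: 5760x1080 pushes roughly half again as many pixels as 2560x1600, so nearly double the fps at the larger resolution would be very odd for a plain rendering workload. Megapixel figures computed from the resolutions named above:

```python
# Pixel counts for the three resolutions discussed in the review.
resolutions = {
    "1920x1200": 1920 * 1200,
    "2560x1600": 2560 * 1600,
    "5760x1080": 5760 * 1080,
}
base = resolutions["2560x1600"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels / 1e6:.2f} MPix ({pixels / base:.2f}x of 2560x1600)")
# 1920x1200: 2.30 MPix (0.56x), 2560x1600: 4.10 MPix (1.00x),
# 5760x1080: 6.22 MPix (1.52x)
```

This is also why cadaveca's later point about Eyefinity matters: if the side monitors aren't rendered at the full pixel workload, the effective resolution is lower than the nominal 6.22 MPix.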
Posted on Reply
#17
ne6togadno
ok, fanboy wars will never end, so can't we finish with the dick-size contest and get back to the topic.
what i see on the cleaned-up pictures (thx Apocolypse) looks more like a Piledriver-based APU (maybe for a next-gen console) than a discrete GPU. it was exciting at the beginning, but at a closer look it is more likely fake news. :(
Posted on Reply
#18
Aquinus
Resident Wat-man
by: ne6togadno
ok, fanboy wars will never end, so can't we finish with the dick-size contest and get back to the topic.
Disagreements and conversation about said topic does not qualify as fanaticism so I recommend not clumping everyone together and calling them "fan boys" when you're just adding to the noise by saying this.

I think we can all agree that the Titan is a powerful card at the cost of some extra moolah, where the 7970 provides somewhat less performance for considerably less moolah. Whether or not the 700-series cards will be more like Titan, we don't know, but what I will say is that regardless of what NVidia has up their sleeves, AMD is working on something as well.

I think everyone should calm down and acknowledge that NVidia and AMD are both very good companies that produce quality hardware. If you disagree with me, then maybe you're being a fanatic, and if people are going to continue bashing on people who are doing things that most here can probably only dream of, I'll challenge them to design a GPU that does better.
Posted on Reply
#19
NeoXF
by: 15th Warlock
(including studies from a Ph.D no less)
OK, now I know you are joking... How the Hell is that relevant to anything ever discussed here? What, doctors or highly educated or top-positioned people can't be biased, wrong or just simply... mistaken? If anything, they are more prone to ego mistakes and corruption. But then again, that's some pure generalization to make a point right there.

I can find any number of reviews where we can see the TITAN hover at 35W or more over the 7970 GHz, as well as benches showing it being beaten by a bunch of frames in DiRT:S, Tomb Raider and probably some other not-so-known titles, as well as having the Radeon breathing down its neck or tying it in Sleeping Dogs, Far Cry 3: Blood Dragon, Metro 2033, AvP 2010, Sniper Elite V2, Max Payne 3 and some games at 4K... As for compute... don't get me started.
And so could you very probably (heck, you just did)... so I don't really care, I just hate praising an overpriced piece of late hardware for things that aren't even true.


by: Aquinus
Disagreements and conversation about said topic does not qualify as fanaticism so I recommend not clumping everyone together and calling them "fan boys" when you're just adding to the noise by saying this.
What I was about to reply to him. LOL
Thanks.


Edit: So, more on-topic. Like the GF 7900 GTX to 8800 GTX (DX 9.0c to DX10) and the Radeon HD 4870 to Radeon HD 5870 (DX10 to DX11), I see this Volcanic Islands card as another huge jump in performance... but related to the jump from HD to UHD more so than anything else, like an API upgrade, because let's face it, none of today's cards cut it for 3840x2160 gaming (not that it's here yet anyway)... I dislike multi-monitor setups so much that multi-GPU and the subsequent issues with them are a non-issue for me from the get-go.
Posted on Reply
#20
d1nky
did you see the review with the 4K resolutions? on a single-card setup the 7970 performed quite well, but Titan did come out on top. and tbh, like people said before, the hardware is performing well on both sides and it's up to the software to catch up.

and is that volcanic islands diagram real?
Posted on Reply
#21
FrustratedGarrett
by: 15th Warlock


From W1zzard's own GTX Titan review, which you can find on the TPU website:

Power consumption:
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_Titan/images/power_multimon.gif

http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_Titan/images/power_average.gif

The 7970 GHz is beaten by Titan in terms of power-consumption efficiency in every single scenario.
Well, the power consumption figures from tech report show a different picture:

The 7970 GHz Edition consumes less than 10 watts more than the GTX 680 at full load, but it consumes less power than the 680 at idle and up to 11 watts less when the display is off, and that makes it more power efficient than the 680.


by: 15th Warlock


As for the 7970 "trashing" Titan in compute performance: the theoretical max double-precision (FP64) performance for the 7970 is 1.08 TFLOPs, whereas Titan's is 1.3 TFLOPs. But don't take it from me; this is (once again) from Anandtech, an analysis of Titan's compute performance by Rahul Garg, a Ph.D. specializing in the field of parallel computing and GPGPU technology:

Out of all the compute tests performed, only in the SystemCompute benchmark is Titan beaten by the 7970 GHz; in all other benchmarks Titan leaves the 7970 in the dust... I wouldn't exactly call that "trashing".
I wouldn't trust anandtech, and I certainly think that they are biased in favor of Intel and against AMD.

In the compute tests from Tom's Hardware and TechReport, or even Hexus, you get a completely different picture. The 7970 does trash even the dual-GPU 690 and blows it out of the water when it comes to shader performance in GPGPU.
Posted on Reply
#22
ne6togadno
by: Aquinus
Disagreements and conversation about said topic does not qualify as fanaticism so I recommend not clumping everyone together and calling them "fan boys" when you're just adding to the noise by saying this.
so you are telling me that Titan vs. 7970 fits nicely with "AMD's Answer to GeForce GTX 700 Series: Volcanic Islands".
english isn't my native language and i don't pretend to perfectly understand it, but obviously i understand it quite a bit worse than i thought. it seems no one wants to (or can) comment on what is shown in the picture, so let's share more "test results" that favor "my graphics card".
anyway, the discussion went too far away from that to be useful. gl with the diagram comparison.
the truth is out there
Posted on Reply
#23
cadaveca
My name is Dave
by: Vinska
Similar situation with AC3 - at 5760x1080 it ran significantly faster compared to 2560x1600, and 1920x1200 had pretty much the same framerates as 5760x1080

If that doesn't spell out the phrase "something is fishy with this review", then I don't know what would.

EDIT: I pointed out these things in the review discussion thread (there was a similar thing with the 7990, too). But no one seemed to care at all. Yet, I would LOVE to get an explanation or even a guess to WTH is wrong here (as something obviously is).
Only AMD can answer why this is the case. I tested myself and get the same results as W1zz does. AMD said their memory management is broken/sub-optimal, and that's how it's noticed...Eyefinity.

Also, Eyefinity doesn't actually draw every single pixel on the side monitors in the same ratio/aspect as on the primary monitor, due to the fish-eye effect. So although the resolution of the monitors is 5760x1080/1200, the workload may not actually be that many pixels, depending on the app.


Do keep in mind that W1zz used to write ATITool, and writes other AMD-specific clocking apps. Best I can tell, he really doesn't care who is faster, and has no agenda...notice we don't have ads here except on the front page. TPU is not a site driven by the opportunity to make money doing reviews...we all just provide the numbers, and you decide who you like based on the results. Because anyone can replicate our tests, in every review. For me, I actually hope you do test and check our numbers... I know you'll find you get the same results.
Posted on Reply
#24
15th Warlock
by: Vinska
yeah, it seems like cherry picking, but my point actually was:
Take these reviews with a HUGE grain of salt.

If you take a better look at W1zz's review, on Sleeping Dogs the 7970 [GE] had almost twice [!] the fps at 5760x1080 compared to 2560x1600. And at 1920x1200 it was slightly behind 5760x1080.
Similar situation with AC3 - at 5760x1080 it ran significantly faster compared to 2560x1600, and 1920x1200 had pretty much the same framerates as 5760x1080

If that doesn't spell out the phrase "something is fishy with this review", then I don't know what would.

EDIT: I pointed out these things in the review discussion thread (there was a similar thing with the 7990, too). But no one seemed to care at all. Yet, I would LOVE to get an explanation or even a guess to WTH is wrong here (as something obviously is).
That's interesting, have you tried sending a PM to W1zzard with your findings? I'm sure he'll appreciate it and change the charts accordingly :)

by: Aquinus
Disagreements and conversation about said topic does not qualify as fanaticism so I recommend not clumping everyone together and calling them "fan boys" when you're just adding to the noise by saying this.
Couldn't agree more, thank you very much for your post. In the almost 9 years I've been visiting this forum, this is the first time I've been labeled a fanboy. Truth is, I had never before made a post with so many charts to try and get my point across, and all for naught LOL :toast:


by: NeoXF
OK, now I know you are joking... How the Hell is that relevant to anything ever discussed here... What, Doctors or highly educated or top positioned people can't be biased, wrong or just simply... mistaken? If anything, they are more prone to ego mistakes and corruption. But then again, that's some pure generalization to make a point right there.
The guy's field of study is parallel computing and GPGPU technology; not much room for bias in that field. Then again, like I said before, you're entitled to your opinion, and you have already mentioned your mistrust of doctors in general - and science by extension (?!) - so there's no point in trying to convince you otherwise, right?

I guess we can all agree that at this point speculating on the performance of graphics cards that are yet to be released is pointless, as there's no evidence whatsoever to back these claims. All we can do is wait and see; no point in fighting to try and show the world who has the biggest e-peen :p

It's all good, like I said, this could drag on forever, perhaps it's better to move on, for the sake of this thread :)
Posted on Reply
#25
Casecutter
by: HumanSmoke
Well, something must have happened between those old tests and the newer ones at [H]OCP...
Yeah, if you read what you posted, [H] says they're using "real gaming" and recorded the highest value in each... Not an average of what it took to complete that section! Sure, a 7970 might peak for a millisecond; is that what they mean by the "highest" value?

While [H] now doesn't tell us the games used, we can hopefully figure out the 5 [H] used in that new review, which are different from the earlier 5. [H] dropped Batman and Witcher (using 11% more watts than the 7970 GHz), which has moved the data against the GHz Edition. Also, in most of the titles the 7970 GHz provides more FPS versus a 680, so we'd logically anticipate more power usage. Even in Sleeping Dogs, [H] had to use the lower 1920x res to get more FPS.

Going back to an average of what a card requires to complete the run-throughs of each game, then taking those five games, adding them together and dividing by 5, is more real-world any way you slice it.
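The averaging Casecutter argues for is just a per-game mean. A minimal sketch, with placeholder game names and wattages (made up for illustration, not measurements):

```python
# Average gaming power draw across run-throughs, as Casecutter describes.
# Game names and wattages below are placeholders, not real data.
per_game_avg_watts = {
    "Game A": 230,
    "Game B": 245,
    "Game C": 210,
    "Game D": 260,
    "Game E": 225,
}

overall = sum(per_game_avg_watts.values()) / len(per_game_avg_watts)
print(f"Average gaming power draw: {overall:.0f} W")  # 234 W for these placeholders
```

The contrast with [H]'s method is that a single peak reading in any one game (say a momentary 300 W spike) would not move this mean much, whereas it defines the "highest value" figure entirely.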

by: 15th Warlock
Why persist in spreading misinformation? This is from a previous thread about the GTX780:

From W1zzard's own GTX Titan review, which you can find on the TPU website:
That's only one game, Crysis 2, on a specific run-through. Sure, it looks good by that one data point, but it hardly tells the whole story when various titles have their own average power usage over a long period of playing each. Sure, if all you play is "Crysis 2 at 1920x1200, Extreme profile, representing a typical gaming power draw. Highest single reading during the test", and you limited your play to that one small run-through each time, then you can abide by that one point of data.
Posted on Reply