Wednesday, May 8th 2013
AMD's Answer to GeForce GTX 700 Series: Volcanic Islands
GPU buyers can breathe a huge sigh of relief that AMD isn't fixated on next-generation game consoles, and that the late-2013 launch of its next GPU generation is with good reason. The company is building a new GPU micro-architecture from the ground up. Codenamed "Volcanic Islands," with members codenamed after famous islands along the Pacific Ring of Fire, the new GPU family sees AMD rearranging the component hierarchy within the GPU in a big way.
Over the past three GPU generations that used the VLIW5, VLIW4, and Graphics Core Next SIMD architectures, the component hierarchy was essentially untouched. According to an early block-diagram of one of the GPUs in the series, codenamed "Hawaii," AMD will designate separate parallel and serial computing units. Serial cores based on either of the two architectures AMD is licensed to use (x86 and ARM) could handle part of the graphics processing load. The stream processors of today make up the GPU's parallel processing machinery.
We can't make out the text in the rather blurry block-diagram, but we are fairly convinced that if it's authentic, AMD is making some big changes. Another reason for AMD's delay could be the silicon fab process. "Tahiti," as implemented on the Radeon HD 7970 GHz Edition, already poses a high thermal envelope. AMD doesn't want the 28 nm process to restrict its next-generation architecture development, and is holding out until the 20 nm process is in place at TSMC. The fab set Q4 as its tentative bulk manufacturing date for the process.
Source:
ChipHell
The source that leaked the block-diagram also posted specifications of the chip that's codenamed "Hawaii," which appears to be the flagship part.
- 20 nm silicon fab process
- 4096 stream processors
- 16 serial processor cores
- 4 geometry engines
- 256 TMUs
- 64 ROPs
- 512-bit GDDR5 memory interface
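For a sense of scale, the leaked figures imply the following theoretical peaks. The clock speeds are not part of the leak; the 1.0 GHz core clock and 6.0 Gbps memory data rate below are purely assumed placeholders, in line with Tahiti-class parts of the era:

```python
# Rough theoretical peaks from the leaked "Hawaii" specs.
stream_processors = 4096
assumed_clock_ghz = 1.0          # hypothetical, NOT part of the leak

# Each stream processor performs one fused multiply-add (2 FLOPs) per cycle.
peak_tflops = stream_processors * 2 * assumed_clock_ghz / 1000

# Memory bandwidth from the 512-bit GDDR5 interface, again assuming a
# placeholder 6.0 Gbps effective data rate.
bus_width_bits = 512
assumed_gddr5_gbps = 6.0         # hypothetical, NOT part of the leak
bandwidth_gbs = bus_width_bits / 8 * assumed_gddr5_gbps

print(f"peak FP32: {peak_tflops:.2f} TFLOPS")   # 8.19 TFLOPS at 1 GHz
print(f"bandwidth: {bandwidth_gbs:.0f} GB/s")   # 384 GB/s at 6 Gbps
```

Under those assumptions, such a chip would land at roughly double Tahiti's shader throughput and 1.5x its memory bandwidth.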
145 Comments on AMD's Answer to GeForce GTX 700 Series: Volcanic Islands
So consumers will be able to purchase limited quantities in Q4 2013 and open availability in Q1 2014? If so, then what will AMD launch between now and then?
I don't doubt that the power usage cost is negligible for the likely user bases of the cards, but I don't see the 7970GE being more frugal in power consumption than the 680 either...except in a very small minority of games.
Anyhow, I'd say all bets are off with any new architectures on new processes. I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI
I was comparing apples to apples, GF100/GF110 to GK 110.
GK 110 is Kepler µarchitecture isn't it ?
I also doubt their full system, minus VGA, pulled only 63W as they report. Just saying. Feel free to check ANY of my motherboard reviews to find more realistic numbers for system power consumption. I'd almost say that [H]ardOCP's reviewer there didn't test anything, really.
Test setup is listed as a 2500k @ 4.8 GHz. Average power consumption of such a CPU is around 150W in prime95, and about 90W in gaming. Impossible to be 63W only for CPU, fans, drives. Just saying. Their numbers are 1000% false. I'd minus at least 75W from each of those listed numbers. Even the NVidia numbers are suspect.
If I'm comparing µarch to µarch, then I would generally look to compare the analogue of each architecture's GPUs. GF100/GF110 and GK110 are both similar in die size, placement within the product stack hierarchy, and feature set. I don't doubt that the [H]ardOCP figures aren't definitive either - they really can't be with the variance between tests conducted only a few months apart. I only used the [H]ardOCP result because Casecutter was using the same source for his initial argument.
...and that's considering they had more than a year to get it right (LMAO @ the people saying they didn't release it because GK104 was "good enough").
nVidia might have marginally better power consumption in the high end, but that is all, nothing special about it. Looking at the lower-end chips, it goes the other way around: AMD actually has a wider advantage over nVidia than nVidia has on the higher-TDP cards.
From W1zzard's own GTX Titan review you can find in the TPU website:
Power consumption:
7970 GHz beat by Titan in terms of power consumption efficiency in every single scenario
Relative performance (average of every single 3D benchmark on every resolution):
GTX Titan beats the 7970GHz in every single resolution, now for Tomb Raider, this from W1zzard's review for the 7990:
The GTX Titan is faster than the 7970GHz in every resolution in that particular game. You may counter that the 7990 is faster (and it is), but that's not even the point; dunno about DiRT Showdown, but if what you say is true (W1zzard doesn't even test cards using that game) then it's probably the only scenario where the 7970GHz beats the Titan...
EDIT: Oh wait, I found these benchmarks using DiRT Showdown at Anand's:
Only in one scenario does the 7970 "beat" Titan (if you call 0.9 FPS beating)
EDIT 2: as for the 7970 "trashing" Titan in compute performance, the theoretical max double-precision (FP64) performance for the 7970 is 1.08 TFLOPS whereas Titan's is 1.3 TFLOPS. But don't take it from me, this is (once again) from Anandtech, an analysis of Titan's compute performance by Rahul Garg, a Ph.D. specializing in the field of parallel computing and GPGPU technology:
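For anyone wondering where those FP64 figures come from, they follow from core count × clock × FP64 rate. A minimal sketch; the clock and rate values are the commonly published ones for these chips, not numbers from this thread:

```python
def fp64_tflops(cores, clock_ghz, fp64_rate):
    # 2 FLOPs per core per cycle (fused multiply-add), scaled by the
    # fraction of throughput available for double precision.
    return cores * 2 * clock_ghz * fp64_rate / 1000

# Tahiti (HD 7970 GHz Edition): 2048 SPs at 1.05 GHz, FP64 at 1/4 rate.
tahiti = fp64_tflops(2048, 1.05, 1 / 4)   # ~1.08 TFLOPS

# GK110 (GTX Titan): 2688 cores, FP64 at 1/3 rate; enabling full-speed
# FP64 drops the clock to roughly 725 MHz, hence the ~1.3 TFLOPS figure.
titan = fp64_tflops(2688, 0.725, 1 / 3)   # ~1.30 TFLOPS
```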
Out of all the compute tests performed, only in the SystemCompute benchmark is Titan beaten by the 7970GHz; in all other benchmarks Titan leaves the 7970 in the dust... I wouldn't exactly call that "trashing"
Also, I do want to remind everyone that TPU does not hold the absolute truth in regards to GPU reviews, you know?
From the very same W1zz's review:
Wahaha~!
Wahahaha~!
Not only that, but every scenario presented completely contradicts the facts you mentioned. I'm not cherry-picking anything; I'm actually posting every single test result, while you mention TR and DiRT... and now I'm the one cherry-picking?
You know, it doesn't really matter. If even showing you all the results (including analysis from a Ph.D., no less) won't convince you, then nothing will. If that's how you feel about this card in particular, you're entitled to your opinions...
Moving on...
EDIT: Just saw Vinska's reply, and I'm the one cherry picking, right...? I presented the condensed results for every single resolution in every single game... but this can drag on forever I see, it doesn't really matter, you guys win, OK ;)
Peace :)
Take these reviews with a HUGE grain of salt.
If you take a better look at W1zz's review, in Sleeping Dogs the 7970 [GE] had almost twice [!] the FPS at 5760x1080 compared to 2560x1600. And at 1920x1200 it was slightly behind 5760x1080.
Similar situation with AC3 - at 5760x1080 it ran significantly faster compared to 2560x1600, and 1920x1200 had pretty much the same framerates as 5760x1080.
If that doesn't spell out the phrase "something is fishy with this review", then I don't know what would.
EDIT: I pointed out these things in the review discussion thread (there was a similar thing with the 7990, too). But no one seemed to care at all. Yet, I would LOVE to get an explanation or even a guess to WTH is wrong here (as something obviously is).
What I see in the cleaned-up pictures (thx apocolypes) looks more like a Piledriver-based APU (maybe for a next-gen console) than a discrete GPU. It was exciting at the beginning, but on closer look it is more likely fake news. :(
I think we can all agree that the Titan is a powerful card at the cost of some extra moolah, where the 7970 provides somewhat less performance for considerably less moolah. Whether or not the 700-series cards will be more like Titan, we don't know, but what I will say is that regardless of what NVidia has up their sleeves, AMD is working on something as well.
I think everyone should calm down and acknowledge that NVidia and AMD are both very good companies that produce quality hardware. If you disagree with me, then maybe you're being a fanatic, and if people are going to keep bashing on engineers doing things most here can probably only dream of, I'll challenge you to design a GPU that does better.
I can find any number of reviews where we can see the TITAN hover at 35W or more over the 7970GHz, as well as benches showing it being beaten by a bunch of frames in DiRT:S, Tomb Raider, and probably some other not-so-well-known titles, as well as having the Radeon breathing down its neck or tying it in Sleeping Dogs, Far Cry 3: Blood Dragon, Metro 2033, AvP 2010, Sniper Elite V2, Max Payne 3, and some games at 4K... As for compute... don't get me started.
And so could you, very probably (heck, you just did)... so I don't really care; I just hate seeing an overpriced, late piece of hardware praised for things that aren't even true. That's what I was about to reply to him. LOL
Thanks.
Edit: So, more on-topic. Like the GeForce 7900GTX to 8800GTX (DX 9.0c to DX10) and the Radeon HD 4870 to Radeon HD 5870 (DX10 to DX11), I see this Volcanic Islands card as another huge jump in performance... but related to the jump from HD to UHD more than anything else, like an API upgrade, because let's face it, none of today's cards cut it for 3840x2160 gaming (not that it's here yet anyway)... I dislike multi-monitor setups so much that multi-GPU and its subsequent issues are a non-issue for me from the get-go.
And is that Volcanic Islands diagram real?
The 7970GHz Edition consumes less than 10 watts more than the GTX680 at full load, but it consumes less power than the 680 at idle and up to 11 watts less when the display is off, and that makes it more power efficient than the 680. I wouldn't trust Anandtech, and I certainly think that they are biased in favor of Intel and against AMD.
In the compute tests from Tom's Hardware and TechReport, or even Hexus, you get a completely different picture. The 7970 trashes even the dual-GPU 690 and blows it out of the water when it comes to shader performance in GPGPU.
English isn't my native language and I don't pretend to understand it perfectly, but obviously I understand it worse than I thought. It seems no one wants to (or can) comment on what is shown in the picture, so let's share more "test results" that favor "my graphics card."
Anyway, the discussion has drifted too far from that to be useful. Good luck with the diagram comparison.
The truth is out there.
Also, Eyefinity doesn't actually draw every single pixel on the side monitors in the same ratio/aspect as on the primary monitor, due to the fish-eye effect. So although the resolution of the monitors is 5760x1080/1200, the workload may not actually be that many pixels, depending on the app.
Do keep in mind that W1zz used to write ATITool, and writes other AMD-specific clocking apps. Best I can tell, he really doesn't care who is faster, and has no agenda...notice we don't have ads here except on the front page. TPU is not a site driven by the opportunity to make money doing reviews...we all just provide the numbers, and you decide who you like based on the results. Because anyone can replicate our tests, in every review. For me, I actually hope you do test and check our numbers... I know you'll find you get the same results.
I guess we can all agree that at this point speculating on the performance of graphics cards that are yet to be released is pointless, as there is no evidence whatsoever to back these claims; all we can do is wait and see. No point in fighting to try and show the world who has the biggest e-peen :p
It's all good, like I said, this could drag on forever, perhaps it's better to move on, for the sake of this thread :)
While [H] doesn't tell us the games used now, we can hopefully figure out the 5 games [H] used in that new review, which are different from the earlier 5. [H] dropped Batman and Witcher (titles where the 680 used 11% more watts than the 7970GHz), which has moved the data against the GHz Edition. Also, in most of the titles the 7970GHz provides more FPS versus a 680, so we'd logically anticipate more power usage. Even in Sleeping Dogs, [H] had to use the lower 1920x resolution to get more FPS.
Going back to an average of what a card requires to complete the run-throughs of each game, then taking those five games, adding them together, and dividing by 5 is more real-world any way you slice it. That's only one game, Crysis 2, on a specific run-through. Sure, it looks good by that one data point, but it's hardly telling the whole story, when various titles have their own average power usage over a long period of playing each. Sure, if all you play is "Crysis 2 at 1920x1200, Extreme profile, representing a typical gaming power draw. Highest single reading during the test," and you limit your play to that one small run-through each time, then you can abide by that one point of data.
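The averaging described above is straightforward: sum the per-game averages and divide by the game count. A minimal sketch; the wattage values below are made-up placeholders, not [H]ardOCP's measurements:

```python
# Suite-average power draw across five games, as the post describes.
# These per-game averages are illustrative placeholders, NOT real data.
per_game_avg_watts = {
    "Game A": 310,
    "Game B": 295,
    "Game C": 330,
    "Game D": 280,
    "Game E": 305,
}

suite_average = sum(per_game_avg_watts.values()) / len(per_game_avg_watts)
print(f"suite average: {suite_average:.0f} W")  # 304 W
```

A single peak reading from one Crysis 2 run-through tells you nothing about where a card sits relative to this kind of suite average.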