
Clock for Clock, Vega VS FuryX and discussion

  • Thread starter: Deleted member 50521 (Guest)
https://www.computerbase.de/2017-08...ts-escalation-rx-vega-56-vs-vega-64-vs-fury-x

[attached: clock-for-clock benchmark chart from the ComputerBase article]


So basically the performance per clock, which in theory should come purely from the improved design, only gives a tiny boost. Most of the performance gain comes from the raised clock rate.
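For anyone who wants to play with the numbers, the split between per-clock (IPC) gains and clock gains is just multiplication. A quick sketch with illustrative figures (roughly the ComputerBase result of ~5% IPC, and Fiji ~1050MHz vs Vega ~1700MHz):

```python
# Decompose a generational GPU speedup into a per-clock (IPC) gain and a clock gain.
# Numbers are illustrative: ~5% clock-for-clock gain, Fiji ~1050 MHz vs Vega ~1700 MHz.

def total_speedup(ipc_gain: float, old_clock_mhz: float, new_clock_mhz: float) -> float:
    """Total speedup = per-clock improvement x clock ratio."""
    return ipc_gain * (new_clock_mhz / old_clock_mhz)

speedup = total_speedup(1.05, 1050, 1700)
print(f"{speedup:.2f}x")  # -> 1.70x: almost all of it comes from the clock bump
```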

http://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/2
Anandtech said:
That space is put to good use however, as it contains a staggering 12.5 billion transistors. This is 3.9B more than Fiji, and still 500M more than NVIDIA’s GP102 GPU. So outside of NVIDIA’s dedicated compute GPUs, the GP100 and GV100, Vega 10 is now the largest consumer & professional GPU on the market.

Given the overall design similarities between Vega 10 and Fiji, this gives us a very rare opportunity to look at the cost of Vega’s architectural features in terms of transistors. Without additional functional units, the vast majority of the difference in transistor counts comes down to enabling new features.

Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or build in latency hiding mechanisms, as electrons can only move so far on a single (ever shortening) clock cycle; this is something we’ve seen in NVIDIA’s Pascal, not to mention countless CPU designs. Still, what it means is that those 3.9B transistors are serving a very important performance purpose: allowing AMD to clock the card high enough to see significant performance gains over Fiji.
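The pipelining trade-off Anandtech describes can be sketched with a first-order timing model: splitting the same logic depth into more stages shortens the critical path per cycle, at the cost of per-stage register overhead and added latency. All delays below are made up for illustration, not actual Fiji/Vega timing data:

```python
# First-order model of why deeper pipelines clock higher: cycle time is the
# per-stage logic delay plus a fixed register/setup overhead added at every stage.
# Delay values are invented for illustration only.

def max_clock_ghz(total_logic_delay_ns: float, stages: int, reg_overhead_ns: float = 0.05) -> float:
    cycle_ns = total_logic_delay_ns / stages + reg_overhead_ns
    return 1.0 / cycle_ns

print(f"1 stage:  {max_clock_ghz(0.90, 1):.2f} GHz")  # shallow pipeline
print(f"3 stages: {max_clock_ghz(0.90, 3):.2f} GHz")  # deeper pipeline clocks much higher
```

Returns diminish because the register-overhead term does not shrink with more stages, which is roughly why the extra clocks cost so many transistors (latches plus latency-hiding structures) rather than pipelining being free.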

This feels like Pentium4 and Faildozer to me. RTG tried to maximize clocks hoping it would give them a performance boost, and what they ended up with is a power-hungry monster. I am sure @W1zzard will do some in-depth analysis of the Vega design in the future.
 
So Vega just boils down to an overclocked Fiji give or take a few features? No wonder it's so crap. :rolleyes:
 
So Vega just boils down to an overclocked Polaris give or take a few features? No wonder it's so crap. :rolleyes:

I'm beginning to think when AMD spun off RTG, they basically gave them two sticks and some flint and said "here, make a GPU"

It's only natural they are rehashing old stuff.
 
So Vega just boils down to an overclocked Polaris give or take a few features? No wonder it's so crap. :rolleyes:
You mean Fiji?
 
I'm beginning to think when AMD spun off RTG, they basically gave them two sticks and some flint and said "here, make a GPU"

It's only natural they are rehashing old stuff.

Point is, a 14nm shrink of Fiji with 8GB HBM2 would have made more sense. It may not have clocked as high as Vega, but probably still in the 1200~1300MHz ballpark. That would have saved RTG a whole lot of development cost and time, and honestly, a shrunk-down Fiji at around 1200~1300MHz released in 2016 wouldn't have looked too bad against the 1080.
 
Uh? Why the hate? Vega is 5% faster than Fiji clock for clock with the ability to clock 62% higher. Fiji was no power sipper either.

RTG wasn't lying when they said Vega was the largest change to GCN since the architecture debuted.

Vega is not enough to dethrone GP102 in gaming but it does in pretty much everything else. This is typical for GCN versus NVIDIA since Maxwell debuted.
 
Point is, a 14nm shrink of Fiji with 8GB HBM2 would make more sense.
That's kind of what they did, since most of the extra transistors went into adding pipeline stages to allow higher clocks at the expense of latency. A simple shrink would not have clocked nearly as high.
Clock for clock they are a few percent apart, perhaps thanks to the new caching system.
 
Well the Fury X does come close to the 1070 in performance
 
Coming soon to celebrity rehab
If fury does coke then vega does rapidly packed meth

I mean rapid packed math


Regarding the R9 Fury - 1070 comparison, I think Fury (3584sp) has done exceptionally well against the 1070, but that's owing to Nvidia using cheap 8Gbps GDDR5 on the 1070 and Fury having ridiculous bandwidth. They're tied @4K in most 2016/17 games. Even the 1060 got upgraded to 9Gbps while the 1070 stayed on 8Gbps. If Volta comes with a GDDR6 controller I don't think Nvidia will use GDDR5/GDDR5X on the 2070 though; it'll get GDDR6 just like the 2080, while Vega 56's bandwidth is significantly lower than the 64's. That leads me to believe that while Vega 56 is a clear performance winner against the 1070 ATM, it might not stack up as well against Nvidia's next-gen cards as the R9 Fury did, especially as the resolution goes up.
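Since rapid packed math got a mention: the idea is simply that one 32-bit register holds two FP16 values and a single instruction operates on both halves, doubling FP16 throughput. A toy illustration of the bit packing in pure Python (the stdlib struct format 'e' is IEEE half precision); this shows the layout only, not actual GPU ISA behavior:

```python
import struct

def pack_fp16_pair(a: float, b: float) -> int:
    """Pack two FP16 values into one 32-bit word (low half = a, high half = b)."""
    lo, = struct.unpack('<H', struct.pack('<e', a))
    hi, = struct.unpack('<H', struct.pack('<e', b))
    return (hi << 16) | lo

def unpack_fp16_pair(word: int) -> tuple:
    """Split a 32-bit word back into its two FP16 values."""
    lo = struct.unpack('<e', struct.pack('<H', word & 0xFFFF))[0]
    hi = struct.unpack('<e', struct.pack('<H', word >> 16))[0]
    return lo, hi

word = pack_fp16_pair(1.5, -2.0)
print(unpack_fp16_pair(word))  # -> (1.5, -2.0)
```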
 
If fury does coke then vega does rapidly packed meth

I mean rapid packed math


Regarding the R9 Fury - 1070 comparison, I think Fury (3584sp) has done exceptionally well against the 1070, but that's owing to Nvidia using cheap 8Gbps GDDR5 on the 1070 and Fury having ridiculous bandwidth. They're tied @4K in most 2016/17 games. Even the 1060 got upgraded to 9Gbps while the 1070 stayed on 8Gbps. If Volta comes with a GDDR6 controller I don't think Nvidia will use GDDR5/GDDR5X on the 2070 though; it'll get GDDR6 just like the 2080, while Vega 56's bandwidth is significantly lower than the 64's. That leads me to believe that while Vega 56 is a clear performance winner against the 1070 ATM, it might not stack up as well against Nvidia's next-gen cards as the R9 Fury did, especially as the resolution goes up.
Considering Volta is over a year away, that gives AMD time to release another series that may keep up.
 
These and some other oddities suggest they simply did not have the time/resources to make the best out of Vega, which shouldn't come as a surprise. It is the biggest architectural jump since the switch from TeraScale to GCN.

They launched Zen in the desktop and server market in less than a year, a pretty big endeavor for a company like AMD that has been dormant at best in the last 4-5 years in the CPU market; resources and manpower have surely been sparse. The hardware is clearly there, the software isn't. That's been the story of RTG's life in the GPU segment for the last couple of years.
 
Considering Volta is over a year away, that gives AMD time to release another series that may keep up.

And Nvidia can possibly refine Pascal even more, though I don't see the clocks going higher, so maybe if they did a respin it would be for even lower power.
12nm FinFET production is ramping up in Q4 (according to TSMC), and this is for consumer Volta. It is possible Nvidia can release Volta-based cards in spring, but who knows.
 
And Nvidia can possibly refine Pascal even more, though I don't see the clocks going higher, so maybe if they did a respin it would be for even lower power.
12nm FinFET production is ramping up in Q4 (according to TSMC), and this is for consumer Volta. It is possible Nvidia can release Volta-based cards in spring, but who knows.

I really, really doubt there is much they can squeeze out of Pascal, I mean Maxwell. The laws of physics have their limits.
 
And Nvidia can possibly refine Pascal even more, though I don't see the clocks going higher, so maybe if they did a respin it would be for even lower power.
12nm FinFET production is ramping up in Q4 (according to TSMC), and this is for consumer Volta. It is possible Nvidia can release Volta-based cards in spring, but who knows.
I doubt Nvidia would ever do that

They will just milk what they have
 
I really, really doubt there is much they can squeeze out of Pascal, I mean Maxwell. The laws of physics have their limits.

Well, to be totally blunt, Nvidia refined and added sauce to Maxwell and made a card that bent Fiji's successor over the fence and spanked it so hard it hurts. And if they can't refine Pascal any further, hello Volta.

I doubt Nvidia would ever do that

They will just milk what they have

Milking is what you do when the competition doesn't show up. It's Business 101. Just like making a card for mining.
 
Well, to be totally blunt, Nvidia refined and added sauce to Maxwell and made a card that bent Fiji's successor over the fence and spanked it so hard it hurts. And if they can't refine Pascal any further, hello Volta.

But Volta seems to be Pascal with Tensor Cores. Look at V100: 5120 CUDA cores at a 1455 MHz boost clock (BOOST CLOCK!), which results in a 300W TDP. You might think that is clearly more efficient, but really it's not. The shader count is huge but the clock speed is not, and remember that more often than not the big contributor to lost power efficiency is clock speed, not shader count/die space. They are playing it safe for a reason. I really can't see them pulling much higher clocks; it's just not going to happen. They might simply increase the core count on their cards, just like they did going from the 600 to the 700 series. Problem is, why would they do that? Why bother.

Volta as it is right now may never see the light of day as a consumer part.
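The efficiency point checks out with back-of-the-envelope math. Using the V100 figures above, plus rough public numbers for GP102 as the Titan Xp (3840 cores, ~1582 MHz boost, 250W; take these as approximate):

```python
# Peak FP32 throughput = cores x 2 ops per clock (FMA) x clock.
# V100 figures from the post above; GP102 (Titan Xp) numbers are approximate specs.

def fp32_gflops(cores: int, boost_ghz: float) -> float:
    return cores * 2 * boost_ghz

v100_gflops = fp32_gflops(5120, 1.455)   # ~14.9 TFLOPS at 300 W
gp102_gflops = fp32_gflops(3840, 1.582)  # ~12.1 TFLOPS at 250 W
print(f"V100:  {v100_gflops / 300:.1f} GFLOPS/W")
print(f"GP102: {gp102_gflops / 250:.1f} GFLOPS/W")
```

Roughly the same perf-per-watt on paper, which fits the post's point that V100's gains come from going wide rather than from clocking higher.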
 
But Volta seems to be Pascal with Tensor Cores. Look at V100: 5120 CUDA cores at a 1455 MHz boost clock (BOOST CLOCK!), which results in a 300W TDP. You might think that is clearly more efficient, but really it's not. The shader count is huge but the clock speed is not, and remember that more often than not the big contributor to lost power efficiency is clock speed, not shader count/die space. They are playing it safe for a reason. I really can't see them pulling much higher clocks; it's just not going to happen. They might simply increase the core count on their cards, just like they did going from the 600 to the 700 series. Problem is, why would they do that? Why bother.

Volta as it is right now may never see the light of day as a consumer part.

I hear what you are saying but all I can think of is how wrong people keep on being about how much Nvidia can do.

And Nvidia doesn't just rehash Maxwell. A lot of changes go on under the hood to make the shrinks and clocks work well. In Pascal the warp scheduler was changed, and it's been redone in Volta (I think, but could be wrong). This is why a properly core-loaded Nvidia Pascal GPU (think at least a 1080 Ti for a good core count) does async well. I mean, a 1080 Ti still beats anything AMD has at async compute and Vulkan. Maybe not by as much as in other titles, but with fewer cores and less engine power it's still on top.

I wouldn't rule Volta out of being capable of some mystical magical hoodoo. That being said, my wallet can definitely wait it out.
 
I hear what you are saying but all I can think of is how wrong people keep on being about how much Nvidia can do.

And Nvidia doesn't just rehash Maxwell. A lot of changes go on under the hood to make the shrinks and clocks work well. In Pascal the warp scheduler was changed, and it's been redone in Volta (I think, but could be wrong). This is why a properly core-loaded Nvidia Pascal GPU (think at least a 1080 Ti for a good core count) does async well. I mean, a 1080 Ti still beats anything AMD has at async compute and Vulkan. Maybe not by as much as in other titles, but with fewer cores and less engine power it's still on top.

I wouldn't rule Volta out of being capable of some mystical magical hoodoo. That being said, my wallet can definitely wait it out.
My wallet definitely couldn't hold out. :roll:

I need a GPU capable of 4K@120 now
 
I really, really doubt there is much they can squeeze out of Pascal, I mean Maxwell. The laws of physics have their limits.
Oh they can.
Basically, they can shift the GPUs down a tier at any time (e.g. 2070 := 1080) and add a new flagship (a low-volume model based on Volta).
The 1050 (Ti) is artificially limited and could clock as high as the 1060.
So we're looking at a potential +30% performance in a theoretical "Pascal refresh".
Vega is not enough to dethrone GP102 in gaming but it does in pretty much everything else.
But GP102 is a gaming chip - NVIDIA designed and optimized it to dominate gaming benchmarks and it does. Who cares about "pretty much everything else"?
 
Lol, I'm not surprised in the slightest; I expected this. Isn't Nvidia's Pascal pretty much Maxwell highly overclocked with some tweaks? Not that it's a bad thing; it's just what it is.
 
Lol, I'm not surprised in the slightest; I expected this. Isn't Nvidia's Pascal pretty much Maxwell highly overclocked with some tweaks? Not that it's a bad thing; it's just what it is.

Maxwell has been giving AMD sleepless nights for years, nothing new.
 
Comparing GPUs clock to clock
:roll:
 