
AMD's Vega-based Cards to Reportedly Launch in May 2017 - Leak

Before you reply that this is a stupid methodology, please put together something based on a methodology you think is good and back it with supporting data.

Ah, that little table from your original post explains where you are coming from ... I wouldn't use the word stupid, though, but slightly unscientific ... FLOPS is a unit, and performance is measured using that unit (among other things such as fps, but fps is resolution dependent, unlike FLOPS). Column 4 of that little table holds the ratio "average frames per second / peak theoretical floating-point operations per second" ... and it would be great to have a unitless ratio for that purpose, but that's very difficult because actual performance is measured in fps.
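To make that ratio concrete, here's a minimal sketch of the arithmetic; the card names and numbers below are made up purely for illustration:

```python
# Sketch of the fps-per-TFLOP ratio discussed above.
# Card names and figures are hypothetical, just to show the calculation.

cards = {
    # name: (average fps at one fixed resolution, peak theoretical TFLOPS)
    "card_a": (95.0, 8.0),
    "card_b": (70.0, 6.0),
}

for name, (avg_fps, tflops) in cards.items():
    # fps per TFLOP: not unitless, but comparable across cards as long as
    # the fps numbers come from the same games at the same resolution.
    ratio = avg_fps / tflops
    print(f"{name}: {ratio:.2f} fps per TFLOP")
```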

Since you are depicting a trend in your graph, it should work out if and only if everything is measured at the same resolution (either that, or a summary across all resolutions) and GPU usage is the same in all games/samples. The second condition is the difficult one.
Why usage? Because, for example, an 8 TFLOPS GPU at 75% usage skews your perf/FLOPS ratio compared to, say, an 8 TFLOPS GPU at 100% usage.
As I said, it's a good thing you extrapolated a linear trend out of the data points to combat the GPU usage skew.
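Here's a rough sketch of that usage skew and why fitting a line through all the points helps; every number is invented for illustration, and numpy's polyfit is just one way to do the fit:

```python
# Sketch: per-point perf/FLOPS ratios get skewed by GPU utilization, but a
# least-squares line through all the samples recovers the overall trend.
# All figures below are made up.
import numpy as np

peak_tflops = np.array([4.0, 6.0, 8.0, 10.0])     # advertised peak throughput
utilization = np.array([0.95, 0.80, 0.75, 0.90])  # actual GPU usage per sample
fps = 12.0 * peak_tflops * utilization             # pretend 12 fps per "used" TFLOP

# Per-point ratios look uneven because of utilization (the 75% sample looks worse)...
print(fps / peak_tflops)

# ...but fitting a line across all points averages that noise out.
slope, intercept = np.polyfit(peak_tflops, fps, 1)
print(f"trend: {slope:.2f} fps per TFLOP (+ {intercept:.2f})")
```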

I'm not even going into how, in one game, calculating a single pixel might need X floating-point operations while another game needs 2X ... but again, the same games were used, so the relative trend analysis works out.

I may have come across as a dick, which wasn't my intention, but I admit that the fact that you are looking for a trend and a relative AMD vs NVIDIA relationship between trends (using averages and the same games) helps make your methodology work.
 
On par with the TX will be good enough for AMD (and is actually reasonable to expect).

Wait, let's look at this graph: a 100% performance match to the 5970. Would you mind telling everyone how many GPUs the 5970 contained?

The 5870, a single chip, is listed at 91% (ignoring the fact that TPU's percentage charts were messed up in a major way until recently).
 

It still took two of them to equal it at 100%, and that's a 1024x768 chart. I mean, really, come on.
 
The HD 5970, being a CF card, scales very poorly at 1024x768. Safe to say the GTX 480 was far from performing like two ATI GPUs. HD 5850 CF / 5970 (essentially the same thing without an overclock) and especially HD 5870 CF wiped the floor with the GTX 480. The GTX 580 did a much better job at that but still wasn't 2x as fast. This is purely talking fps at high resolutions, not actual picture quality. I think in that generation the HD 5850 was the best GPU price-to-performance wise, and the HD 5870 the best GPU performance wise without using too much energy.
 

I had CrossFire back then. In the three games it actually worked in, it was awesome. Fermi was a turning point for NVIDIA and to this date one of my favorite cards.

As for power consumption... um, it wasn't that high, just high for the time. The 290 consumes more power, and so do the rest of the current top-tier cards. The whole "it uses less energy" crutch is annoying. I didn't buy a gaming PC to save the ecosystem. Performance is performance, and Fermi offered more of it. They also consumed less power once you got the temps down.
 
That doesn't change the fact that the GTX 480 was not an efficient GPU and that the HD 5870 was much better all around.

Comparing older generations with new ones is kind of moot. The energy consumption for its performance was way too high. The GTX 580 was okay, but the GTX 480 was a mess; it's like comparing a reference 290 with a custom 290 or the 390s.

You find it annoying that I'm talking about efficiency and power consumption, fine. I'm not using my PC to save the environment either (look at my specs), but I'd choose the more efficient GPU any day, that's my whole point. I always try to find a good balance.
 
The 580, overclocked, had a habit of consuming just as much power as the 480, if not more...
 
But it was so much faster too. 32 more CUDA cores, much more efficient, everything. I had a GTX 570 revised edition with the smaller PCB, and that was a great card. Very efficient for a big Fermi.
 

It was an additional cluster unlocked over the 480 and the same basic core, just on a better process. You know, the same issue we are seeing with the current AMD 480: piss-poor process, higher power consumption, etc.
 
I heard they reworked the architecture; I don't think it was the process, or only partly, as they already had experience with the process from the revamped GTX 275 series before it (the usual test-drive GPUs for new processes). In the end it had the additional cluster plus a way better functioning GPU in general. GF110 != GF100.
 
No, you are missing nothing. He posted graphs in one of the other threads claiming there is a performance-per-FLOP trend, etc. Basically he used Excel and is proud of it. Let him have his Excel moment; remember, not everyone can use Excel.

 