
AMD Radeon RX Vega Preview

HA! What better card should I have invested in?

I'll donate them so they will still get some good use, rather than let them be beaten to death and thrown away.

And to call a pair of pristine, low-mileage PowerColor PCS+ 290Xs "junk" is laughable. These will be "precious gems" in the future if they are taken care of. I love my tech and try to take good care of it. Hell, my 4850s and 6870s still run just fine.

JAT

Well that's where we're different. Some of us are collectors and some of us throw stuff away. I'd rather have a clean house than save stuff I won't ever use again just to look at it.
 
I have waited three long years to replace my 290X Crossfire setup. I've been holding out for Vega since last year; I assumed we would see Vega in February 2017, then saw the 1080 Ti drop and thought NV must know AMD is bringing something good to the table, so I waited. Now this has landed, and it seems the liquid-cooled Vega 64 LE cannot be purchased without a bundle, and it brings lots of heat/power for a year-old 1080 competitor.

So, I just pulled the trigger on the Zotac 1080ti Amp Extreme Core, my first Nvidia card (not counting an MX400 spare from forever ago)... I guess @cdawall was right...I should have ordered it back in May.

Sorry AMD, I tried, but this launch has been a complete and utter failure in my opinion.

JAT
Since you waited so long you could have at least waited one more week for reviews.
 
These tests were done on an Intel Core i7-7700K at 4.2 GHz

 
It's the best CPU for gaming, even AMD knows it. ;)
 
Since you waited so long you could have at least waited one more week for reviews.
At this point, it's 95% clear that he made the right decision. But yeah, after holding back for so long, I would've given it one more week, too.
 
At this point, it's 95% clear that he made the right decision. But yeah, after holding back for so long, I would've given it one more week, too.
+1
 
Since you waited so long you could have at least waited one more week for reviews.

At this point, it's 95% clear that he made the right decision. But yeah, after holding back for so long, I would've given it one more week, too.


I've been waiting since January, when the rumors first hinted it would arrive "shortly" after Polaris. Now this is the big reveal, and a company historically known for cherry-picking its bench numbers compares the new "flagship" Vega to a 980 Ti and a 1080; that spoke volumes to me. On top of that, limiting sales of the LE and liquid editions to "bundles" is just ridiculous. I have a stable foundation and wanted a new video card...not an entire new platform.

To be honest, I hope this was a stupid decision and AMD has been sandbagging this entire time...that is what I have been hoping since mid-April, when all Vega info was shut down after the 1080 Ti launch. But the "reveal" Sunday night is pretty hard to rationalize as sandbagging. This was the time to show off the "true power" of Vega, build brand awareness for the 14th when the NDA lifts, and get promo materials into vendors' hands before store shelves are stocked...but instead they compared it to a card that launched in June 2015 (980 Ti) and one that launched in May 2016 (GTX 1080), the latter of which it traded blows with but clearly did not "beat hands down"...oh, and it uses more power along the way. I don't think any changes in the next week will give this card 25-45% more performance, but I sure hope I'm wrong...

Anyhow, I got a great deal on my 1080ti. Why wait?
 
I've been waiting since January, when the rumors first hinted it would arrive "shortly" after Polaris. Now this is the big reveal, and a company historically known for cherry-picking its bench numbers compares the new "flagship" Vega to a 980 Ti and a 1080; that spoke volumes to me. On top of that, limiting sales of the LE and liquid editions to "bundles" is just ridiculous. I have a stable foundation and wanted a new video card...not an entire new platform.

To be honest, I hope this was a stupid decision and AMD has been sandbagging this entire time...that is what I have been hoping since mid-April, when all Vega info was shut down after the 1080 Ti launch. But the "reveal" Sunday night is pretty hard to rationalize as sandbagging. This was the time to show off the "true power" of Vega, build brand awareness for the 14th when the NDA lifts, and get promo materials into vendors' hands before store shelves are stocked...but instead they compared it to a card that launched in June 2015 (980 Ti) and one that launched in May 2016 (GTX 1080), the latter of which it traded blows with but clearly did not "beat hands down"...oh, and it uses more power along the way. I don't think any changes in the next week will give this card 25-45% more performance, but I sure hope I'm wrong...

Anyhow, I got a great deal on my 1080ti. Why wait?
If it's a great deal, I'd say go for it :)
 
Wasn't always that way, but I think Pascal (GP100) was the turning point in recent years.
Maybe even in the Kepler era, with the Titan Black (and Titan Z) and their extra double-precision units (non-disabled DP cores) ... neural networks weren't GPU-accelerated back then, and DP performance was just an extra for the compute crowd.
If you are thinking of the Tesla line, it's been completely separate since G80.
 
I would like you to correct the article. You say these new Vegas have a 4096-bit bus, which is not true. Like HBM1, each HBM2 stack has a 1024-bit bus, and since these new Vegas have only two stacks, the total bus is 2048 bits. The only one that actually has 4096 bits is Vega Frontier.
 
I would like you to correct the article. You say these new Vegas have a 4096-bit bus, which is not true. Like HBM1, each HBM2 stack has a 1024-bit bus, and since these new Vegas have only two stacks, the total bus is 2048 bits. The only one that actually has 4096 bits is Vega Frontier.

Thanks, it was rushed for time, but I should have caught that. It's corrected now.
 
I see the new drivers will have some sorely missed features, nice effort from the software team ... also, quietly, as of five days ago, DX12/Vulkan game devs can profile their games on Radeon: http://gpuopen.com/radeon-gpu-profiler-1-0/
[Image: Radeon GPU Profiler system activity screenshot]
 
AMD has stated that TGP (Total Graphics Package) for Vega 56 is 165 W, and this includes GPU + memory + interposer, so we're looking at about a 150 W TDP. The Nano will be 150 W TGP and the Vega 64 about 220 W max TGP.
Another Nano GPU with super-aggressive throttling. Say hello to unstable frame rates.

That's where they fail. You can put as much hardware scheduling and load balancing on the GPU as you like, but if the drivers are still limited by the CPU because all commands are sent serially on just one thread, it will all be for nothing. 4096 shaders is huge; DX11 performance is bound to be bad.

If you use Vulkan and DX12 this is supposed to be fixed, but Vulkan's adoption rate is low and DX12 implementations are abysmal for the most part.
Well, Nvidia makes it work, so you can't blame the APIs. GCN is simply inefficient in keeping the shader processors fed.
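To put rough numbers on the draw-call argument above, here is a minimal C++ sketch; the 10 µs per-draw cost is purely an assumed ballpark for illustration, not a measured figure for any driver.

```cpp
#include <cstdio>

int main() {
    // Illustrative figures only: real per-draw driver cost varies widely
    // with state changes, driver version and CPU.
    const double frame_budget_ms  = 1000.0 / 60.0;  // 60 fps target
    const double cost_per_draw_us = 10.0;           // assumed CPU cost of one draw call

    const double draws_per_frame = frame_budget_ms * 1000.0 / cost_per_draw_us;
    std::printf("one submission thread sustains roughly %.0f draw calls per frame\n",
                draws_per_frame);
    // ~1667 draws per frame: beyond that the GPU idles while a single CPU
    // core grinds through the command stream, no matter how many shaders it has.
}
```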

There is some confusion here between a floating-point frame buffer (the HDR lighting and bloom technique that has existed since 2004) and 10-bit or 12-bit HDR display output, which requires a new display (and the game has to support both the FP frame buffer and the high-dynamic-range output).
I just want to add that DirectX has supported 16-bit-per-channel HDR since version 9 or so, which is then tone mapped to 8-bit "SDR". But internally, all calculations are done at 32-bit, which obviously is a waste of performance.
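As a rough illustration of that pipeline, here is a minimal C++ sketch of collapsing a linear HDR value into an 8-bit "SDR" code; the Reinhard operator and gamma 2.2 are arbitrary choices for this example, real engines use fancier curves.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Map a linear HDR luminance value (may exceed 1.0) to an 8-bit display code.
std::uint8_t tonemap_to_sdr(float hdr) {
    float mapped = hdr / (1.0f + hdr);             // Reinhard: compress [0, inf) into [0, 1)
    float gamma  = std::pow(mapped, 1.0f / 2.2f);  // encode for a gamma-2.2 display
    return static_cast<std::uint8_t>(std::clamp(gamma, 0.0f, 1.0f) * 255.0f + 0.5f);
}

int main() {
    // Values a floating-point frame buffer can hold: 0.5 = mid grey,
    // 4.0 = bright sky, 16.0 = a light source picked up by a bloom pass.
    for (float hdr : {0.5f, 1.0f, 4.0f, 16.0f})
        std::printf("HDR %5.1f -> SDR %3d\n", hdr, static_cast<int>(tonemap_to_sdr(hdr)));
}
```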

I would like you to correct the article. You say these new Vegas have a 4096-bit bus, which is not true. Like HBM 1, each HBM2 stack has a 1024-bit bus. As these new vegas have only 2 Stacks, so the total bus is 2048 Bits. The only one that actually has 4096 bits is Vega Frontier.
Actually, Vega FE also has 2048-bit. It just has more chips in each stack.
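For anyone checking the math, a quick sketch of where those numbers come from; the ~1.89 Gbps per-pin rate is the commonly quoted figure for RX Vega 64's HBM2 and will differ for other SKUs.

```cpp
#include <cstdio>

int main() {
    const int    stacks         = 2;     // RX Vega 56/64 carry two HBM2 stacks
    const int    bits_per_stack = 1024;  // each HBM2 stack exposes a 1024-bit interface
    const double gbps_per_pin   = 1.89;  // assumed effective data rate per pin (Vega 64)

    const int    bus_width = stacks * bits_per_stack;         // 2048 bits
    const double bandwidth = bus_width * gbps_per_pin / 8.0;  // bits -> bytes, in GB/s

    std::printf("bus width: %d-bit, bandwidth: ~%.0f GB/s\n", bus_width, bandwidth);
    // -> 2048-bit and ~484 GB/s, matching the figures quoted for RX Vega 64.
}
```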
 
Not competitive enough on price. This will be Fury all over again. Will get AMD nowhere.
 
Well, Nvidia makes it work, so you can't blame the APIs. GCN is simply inefficient in keeping the shader processors fed.

When did I say that? I simply said their hardware isn't going to perform at its best under those APIs. GCN is not inefficient; it just needs a different approach, an approach that hasn't been compatible out of the box with the way previous APIs work.
 
But internally, all calculations are done at 32-bit, which obviously is a waste of performance.
It does seem wasteful, but the way it was used, full precision for image-based lighting and tone mapping for exposure control, produced much better results than any kind of post-processing shader approximation back then. Oblivion was a beautiful waste of performance back in 2006, antialiasing be damned :laugh:
 
Unfortunately the higher frequency comes at a price. Vega, if I am not mistaken, isn't performing the same as Fiji at the same frequency. I was afraid of this. If it were performing like Fiji, at 1600 MHz it would probably have landed in the middle, between the 1080 and the 1080 Ti.
We need to wait and see real-world benchmarks. Vega is quite different from Fiji.
 
This will not happen, but what if AMD purposely disabled something(s) in the drivers that made Vega perform worse than both the 1080 and the 1080 Ti, so that, when finally unveiling the cards with proper reviews, Vega would pull quite a bit ahead of the 1080 Ti?

Definitely won't happen ...
 
We need to wait and see real-world benchmarks. Vega is quite different from Fiji.
Yes, we do. But based on those preliminary numbers, the performance is not there.

This will not happen, but what if AMD purposely disabled something(s) in the drivers that made Vega perform worse than both the 1080 and the 1080 Ti, so that, when finally unveiling the cards with proper reviews, Vega would pull quite a bit ahead of the 1080 Ti?

Definitely won't happen ...

That would be nice, but there is no logic to it. I mean, many people were waiting to see what Vega can do before opening their wallets for a high-end gaming card. Most of them have already opened their wallets and ordered a GTX card after seeing those first numbers. If someone at AMD took the decision to disable something in the drivers to make Vega look worse, even for just a 15-day period, they should not only be fired yesterday, but also get sued by AMD's shareholders for lost profits.
 
When did I say that? I simply said their hardware isn't going to perform at its best under those APIs. GCN is not inefficient; it just needs a different approach, an approach that hasn't been compatible out of the box with the way previous APIs work.
Direct3D 12 and Vulkan are architecture-independent APIs; they have nothing to do with scheduling inside the GPU. GPU threads, data dependencies, memory fetching, etc. are all controlled by the GPU itself. When AMD can't keep their GPUs fed, that's their problem.
 
Direct3D 12 and Vulkan are architecture-independent APIs; they have nothing to do with scheduling inside the GPU. GPU threads, data dependencies, memory fetching, etc. are all controlled by the GPU itself. When AMD can't keep their GPUs fed, that's their problem.

DX11 draw calls are single-threaded by default. Nvidia's drivers allow these commands to be dispatched from multiple threads because, since Kepler, they have stripped much of the scheduling out of the GPU and moved it into the driver. AMD could do this too, but because of their underlying architecture it can't be done automatically at the driver level like in Nvidia's case. DX12 and Vulkan can dispatch these commands directly from multiple threads, removing the need for this to be done at the driver level.

It's obvious that if most of the control flow takes place at the driver level, one can utilize the hardware more efficiently at the cost of higher CPU overhead, so this does in fact have to do with how scheduling works both at the GPU level and at the software level.

All of these APIs are hardware-independent, but that doesn't mean they are a perfect fit out of the box for every piece of hardware ever made.
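A stripped-down, API-agnostic C++ sketch of the recording/submission difference described above; the CommandList type is a hypothetical stand-in for a D3D12 command list or Vulkan command buffer, and the point is only the structure: record in parallel, then hand everything to the driver in one ordered submission.

```cpp
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for a D3D12 command list / Vulkan command buffer.
struct CommandList {
    std::vector<int> draws;
    void record_draw(int mesh_id) { draws.push_back(mesh_id); }
};

int main() {
    const int num_threads = 4;
    const int draws_total = 10000;
    std::vector<CommandList> lists(num_threads);
    std::vector<std::thread> workers;

    // DX12/Vulkan model: each worker records its own slice of the frame;
    // no shared state and no locks while the expensive encoding happens.
    for (int t = 0; t < num_threads; ++t)
        workers.emplace_back([&, t] {
            for (int d = t; d < draws_total; d += num_threads)
                lists[t].record_draw(d);
        });
    for (auto& w : workers) w.join();

    // Submission is still a single ordered step, but it is now cheap:
    // the per-draw work has already been spread across all CPU cores.
    std::size_t submitted = 0;
    for (const auto& l : lists) submitted += l.draws.size();
    std::printf("submitted %zu draws recorded on %d threads\n", submitted, num_threads);
}
```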

Most of them have already opened their wallets and ordered a GTX card after seeing those first numbers.

I am doubtful of that; the bulk of the people who wanted a 1080/1080 Ti bought one long ago. The number of people who really did wait until now is most likely very small.
 
DX11 draw calls are single-threaded by default. Nvidia's drivers allow these commands to be dispatched from multiple threads because, since Kepler, they have stripped much of the scheduling out of the GPU and moved it into the driver. AMD could do this too, but because of their underlying architecture it can't be done automatically at the driver level like in Nvidia's case. DX12 and Vulkan can dispatch these commands directly from multiple threads, removing the need for this to be done at the driver level.
Multiple threads building a single queue require synchronization, which introduces overhead. Multiple queues are only used for separate workloads, like physics or a separate rendering pass.
The lack of scaling for GCN has nothing to do with multithreading.
 
Multiple threads building a single queue require synchronization, which introduces overhead. Multiple queues are only used for separate workloads, like physics or a separate rendering pass.
The lack of scaling for GCN has nothing to do with multithreading.

Nvidia's drivers allow for multithreaded draw calls to be decoupled from what the API does. AMD's do not.

You can argue with me all day, but it is a well-known fact: AMD's poor performance over the last couple of years had everything to do with the lack of multithreaded drivers.
 