
AMD "Vega 20" Optical-Shrunk GPU Surfaces in Linux Patches

How do you know? They use a different process. TSMC is miles ahead of everyone.

It's not the process. Larger dies have been problematic to build for ages. If you understood a bit about how chips are made, we wouldn't be having this conversation.
 
So, they wanted a compute chip, but then sold it as a gaming chip? Clearly they wanted a gaming chip, because they released vega 56/64.

If their goal was to have a compute chip, they failed massively, because they sold it as a gaming chip, not a compute chip. As you said, they could have scaled up Polaris; the fact they didn't indicates they wanted vega to be their high-end chip, and as a high-end chip it failed. That is not GloFo's fault, that is AMD's.

Also, the 1050/Ti are built on GloFo's 14nm and don't have the same issues vega did (high power consumption, production issues). nvidia managed to make their arch work on two different processes without issue; AMD didn't get vega working properly on one. If the process was the issue, nvidia wouldn't have used it. And yes, I know the 1050 Ti is a lot smaller, but the fact that they got it to work, with power consumption comparable to their TSMC parts, indicates that GloFo wasn't at fault for AMD's failures.

GP107/108 are made on Samsung's 14nm LPP, not GloFo's. While they are similar (GloFo licensed Samsung's 14nm, after all), they aren't exactly the same. But yeah, I kind of agree with you; heck, the 815 mm² GV100 can reach the same clocks at lower power consumption than the 483 mm² vega 10's maximum clocks. It can't all be because of a bad GloFo manufacturing process.
 
It's a shrink; how do you expect it to have a redesigned RAM controller?
I don't think the memory controller really matters for GCN; it is very easy and simple to change those. Don't forget GCN is modular.
 
So you think this is just temporary, or will the big money move on to better things? Genuinely interested in how this plays out. Also, we might see the 2.4 Gbps HBM modules being used here.

I'm almost certain it's temporary, but of course don't know 100%. No one does.

Me neither, because I don't want to help those miners repay their cards, and because the cards have been abused.

lol, gaming is more stressful. I'm glad at least you admitted your primary reason: you just hate miners. Good thing there are plenty willing to get the good deal you're passing on.
 
I don't think the memory controller really matters for GCN; it is very easy and simple to change those. Don't forget GCN is modular.

I wonder how different the two technologies actually are, GDDR vs HBM. At the end of the day HBM is just stacked DRAM dies, not so different from a GPU with 8 to 12 GDDR chips around it. The practical differences are lower power consumption, less overall board space, and a much wider (if slower-clocked) bus compared to GDDR.
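For a rough sense of that tradeoff, the wide-bus-vs-fast-pins point can be put in numbers with simple peak-bandwidth arithmetic. The per-pin data rates below are generic HBM2 and GDDR5 example figures, not the specs of any card discussed in the thread.

```python
# Peak bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gb/s) / 8.
# Example figures: one HBM2 stack vs one GDDR5 chip (illustrative, not
# taken from any specific card in this thread).

def peak_gb_per_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Theoretical peak bandwidth in GB/s for one memory device."""
    return bus_width_bits * gbps_per_pin / 8

# One HBM2 stack: very wide (1024-bit) interface at a modest 2.0 Gb/s per pin.
hbm2_stack = peak_gb_per_s(1024, 2.0)   # 256.0 GB/s

# One GDDR5 chip: narrow (32-bit) interface at a fast 8.0 Gb/s per pin.
gddr5_chip = peak_gb_per_s(32, 8.0)     # 32.0 GB/s

print(f"HBM2 stack : {hbm2_stack:.0f} GB/s")
print(f"GDDR5 chip : {gddr5_chip:.0f} GB/s")
```

A wide, slowly clocked bus reaches with one stack what takes eight narrow, fast chips, which is where HBM's power and board-space savings come from.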
 
I'm almost certain it's temporary, but of course don't know 100%. No one does.



lol, gaming is more stressful. I'm glad at least you admitted your primary reason: you just hate miners. Good thing there are plenty willing to get the good deal you're passing on.
It's true gaming is more stressful, but that's only because an experienced miner undervolts and optimizes wattages. Logically, if you game for 2 hours and you mine unoptimized 24/7, then mining would be worse on the card.
 
It's true gaming is more stressful, but that's only because an experienced miner undervolts and optimizes wattages. Logically, if you game for 2 hours and you mine unoptimized 24/7, then mining would be worse on the card.

But still, any of these cards should be able to run for years straight at stock without issues. Most of the cards we are discussing haven't even been out that long; plus, if they were used for mining, they've been underclocked. I see no issue with buying used, honestly.
 
I don't think the memory controller really matters for GCN; it is very easy and simple to change those. Don't forget GCN is modular.

How can you add a module if that module doesn't exist yet? R&D tailoring, testing beta silicon, etc. It is not a refresh.
 
testing beta silicon, etc. It is not a refresh.
These days, testing with actual silicon is very rare. Most of these chips see a handful of manufacturing iterations at best.
 
These days, testing with actual silicon is very rare. Most of these chips see a handful of manufacturing iterations at best.

Proof? Beta silicon is beta. ES samples haven't ceased to exist; nothing really has changed there.
 
Not likely given the present market. It goes up and down. We are presently down.
Where can I buy an RX 580 for MSRP in EU?
 
But still, any of these cards should be able to run for years straight at stock without issues. Most of the cards we are discussing haven't even been out that long; plus, if they were used for mining, they've been underclocked. I see no issue with buying used, honestly.
The GPU itself won't degrade, but the rest of the card could, especially the electrolytic capacitors.
Crappy 5000-hour-rated caps would reach their rated lifetime in only about 7 months of 24/7 use.
 
Also, the 1050/Ti are built on GloFo's 14nm and don't have the same issues vega did (high power consumption, production issues).
What "production issues"? AMD had issues getting enough HBM2 and even GDDR, but not pumping the chips.
 
The GPU itself won't degrade, but the rest of the card could, especially the electrolytic capacitors.
Crappy 5000-hour-rated caps would reach their rated lifetime in only about 7 months of 24/7 use.

Actually, no. Capacitors are rated at 90 or 100 °C, and they will NEVER hit that in a case, which exponentially increases their lifespan beyond the rating...
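The derating argument above can be put in numbers with the common rule of thumb for electrolytic caps: expected life roughly doubles for every 10 °C below the rated temperature. The rated and operating temperatures in this sketch (105 °C rating, 55 °C in-case) are assumed example values, not measurements from any card.

```python
# Rule-of-thumb electrolytic capacitor life estimate (10-degree rule):
# life roughly doubles for every 10 degrees C below the rated temperature.
# The 105 C rating and 55 C operating temperature are assumed examples.

def cap_life_hours(rated_hours: float, rated_temp_c: float, actual_temp_c: float) -> float:
    """Estimated life in hours at actual_temp_c, given the datasheet rating."""
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

rated = 5000.0  # the "crappy 5000-hour" cap from the thread

# Run 24/7 right at the rated temperature: about 7 months.
print(f"at rated temp: {rated / 24 / 30:.1f} months of 24/7 use")

# Same cap at a cooler 55 C inside a case, if rated at 105 C:
derated = cap_life_hours(rated, 105, 55)
print(f"at 55 C: {derated:,.0f} h, about {derated / 24 / 365:.0f} years of 24/7 use")
```

Under those assumptions the 50 °C margin multiplies the rated life by 2^5 = 32, which is the "never hits the rating in a case" point in the reply above.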

Where can I buy an RX 580 for MSRP in EU?

Soon.
 
The manufacturing process works fine for nvidia; the problem is AMD's design, not the manufacturing process.
How do you know? They use a different process. TSMC is miles ahead of everyone.
It's not the process. Larger dies have been problematic to build for ages. If you understood a bit about how chips are made, we wouldn't be having this conversation.
This is a good opportunity to remind everyone what nvidia was able to do with Maxwell GM204 semi-functional dies, aka the GTX 970 ... they cut out a couple of bad modules inside a module that is inside a module and made the strongest x70 GPU relative to the x80 one ... when AMD cuts, it's like they have to use a chainsaw, so to speak; there's no fine modularity in their monolithic arch
 
This is a good opportunity to remind everyone what nvidia was able to do with Maxwell GM204 semi-functional dies, aka the GTX 970 ... they cut out a couple of bad modules inside a module that is inside a module and made the strongest x70 GPU relative to the x80 one ... when AMD cuts, it's like they have to use a chainsaw, so to speak; there's no fine modularity in their monolithic arch
Fine and dandy with a “mainstream design”, but we all know Vega's design and manufacturing/assembly are anything but “normal”; apples and oranges in this case.
 
This is a good opportunity to remind everyone what nvidia was able to do with Maxwell GM204 semi-functional dies, aka the GTX 970 ... they cut out a couple of bad modules inside a module that is inside a module and made the strongest x70 GPU relative to the x80 one ... when AMD cuts, it's like they have to use a chainsaw, so to speak; there's no fine modularity in their monolithic arch

I would say Vega is pretty modular; going from integrated graphics to 500 mm² dies is proof of that.

There is a reason why AMD limited themselves to medium-sized dies for Zen and Polaris: power and clocks most likely wouldn't have scaled nicely. They did their best to avoid huge dies, but with Vega, unfortunately, they couldn't.
 
This is a good opportunity to remind everyone what nvidia was able to do with Maxwell GM204 semi-functional dies, aka the GTX 970 ... they cut out a couple of bad modules inside a module that is inside a module and made the strongest x70 GPU relative to the x80 one ... when AMD cuts, it's like they have to use a chainsaw, so to speak; there's no fine modularity in their monolithic arch
Their designs, their choices. I'm not going to pat anyone on the back for them. If they yield me a product within my price and performance requirements, good. Otherwise, no sale.
 
Fine and dandy with a “mainstream design”, but we all know Vega's design and manufacturing/assembly are anything but “normal”; apples and oranges in this case.
Well, it may be an apple to your oranges, but I was referring to the three posts I quoted ...
I would say Vega is pretty modular; going from integrated graphics to 500 mm² dies is proof of that.
... and that's it, one level of modular design ... if you look at nvidia's arch there are three levels, last I checked ... each level with its own cache
I'm not going to pat anyone on the back for them.
Pat yourself on the back then :) ... I mean good design is a good design
 
Well, it may be an apple to your oranges, but I was referring to the three posts I quoted ...

... and that's it, one level of modular design ... if you look at nvidia's arch there are three levels, last I checked ... each level with its own cache

Pat yourself on the back then :) ... I mean good design is a good design
Nvidia has 6 levels these days
 
Lots of SKUs carved outta Polaris
... as it should be, and the GCN arch does fine in everything except reaching optimal power efficiency ... you gotta have a tiled rasterizer for that ... and for a tiled rasterizer you gotta have cache on every level of the arch hierarchy ... a modular/hierarchical arch also helps you with yields (that's why I mentioned the GTX 970)
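The yields point above can be illustrated with a toy Poisson defect model: the fraction of perfect dies falls off fast with die area, and a modular arch lets you sell many of the dies that catch one defect (the GTX 970 / Polaris-cut situation) instead of scrapping them. The defect density and die area below are made-up illustrative numbers, not foundry data.

```python
import math

# Toy Poisson yield model: defects land on a die at rate
# lambda = defect density * die area, and P(k defects) follows Poisson(lambda).
# Defect density and die area here are assumed illustrative values.

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k defects on one die."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

defect_density = 0.2   # defects per cm^2 (assumed)
die_area_cm2 = 4.0     # roughly a 400 mm^2 die (assumed)
lam = defect_density * die_area_cm2

perfect = poisson_pmf(0, lam)      # dies usable for the fully enabled SKU
one_defect = poisson_pmf(1, lam)   # candidates for a cut-down salvage SKU

print(f"perfect dies    : {perfect:.1%}")
print(f"one-defect dies : {one_defect:.1%}")
print(f"sellable if every one-defect die can be salvaged: {perfect + one_defect:.1%}")
```

The finer-grained the modularity, the more of those one-defect dies actually have their defect inside a block that can be fused off, so the salvage fraction gets closer to the optimistic total in the last line.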
 