Tuesday, June 5th 2018
AMD Demonstrates 7nm Radeon Vega Instinct HPC Accelerator
AMD demonstrated the world's first GPU built on the 7-nanometer silicon fabrication process: a Radeon Vega Instinct HPC/AI accelerator with a 7 nm GPU based on the "Vega" architecture at its heart. The chip is an MCM combining the 7 nm GPU die with 32 GB of HBM2 memory across four stacks (4096-bit memory bus width). It is also the first product to feature a removable Infinity Fabric interface (a competitor to NVIDIA's NVLink). There will also be variants based on the common PCI-Express 3.0 x16 interface. The card supports hardware virtualization and new deep-learning ops.
100 Comments on AMD Demonstrates 7nm Radeon Vega Instinct HPC Accelerator
Although mine did measure to around 520 mm², and I have played a couple of times with images to check sizes before; they ended up in the same vicinity.
- GamersNexus measured it at 20.25 mm × ~26 mm = 526.5 mm²
- PC Perspective measured it at 25.90 mm × 19.80 mm = 512.82 mm²
PCPer's update on the size triggered Raja's tweet.
Do we have another source for the 484 mm² figure?
Closest perfect square to the 510-520 measured sizes would be 23² = 529, wouldn't it?
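For what it's worth, the arithmetic in the measurements quoted above checks out, and so does the perfect-square comparison; a quick sketch (the two candidate edge lengths, 22 mm and 23 mm, are just the ones implied by the 484 mm² and 529 mm² figures in this thread):

```python
# Sanity-check the die-size measurements quoted above.
gamersnexus = 20.25 * 26.0    # mm × mm, GamersNexus
pcper = 25.90 * 19.80         # mm × mm, PC Perspective

print(f"GamersNexus:    {gamersnexus:.2f} mm^2")   # 526.50
print(f"PC Perspective: {pcper:.2f} mm^2")         # 512.82

# Which square die (integer edge length in mm) lands closest
# to the 510-520 mm^2 measured range? 22^2 = 484 (the older
# figure floating around), 23^2 = 529.
for edge in (22, 23):
    print(f"{edge}^2 = {edge * edge} mm^2")
```

529 is 9-19 mm² above the measured range, while 484 is 26-36 mm² below it, so 23² is indeed the closer perfect square.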
I'm not sure what the problem with OpenCL is; CUDA should have died a long time ago if OpenCL was any good.
Do you have 1 more source?
Earlier, the GPU race was about getting to "1440p 60-144 fps" at ultra settings (which the 1080 Ti does pretty well in almost anything on the market).
The "NEXT" challenge is 4K at 144 fps, not 4K at 60+ fps, because the 1080 Ti already does that too.
So right now the main reason to get a card that performs better than a 1080 Ti is if you own one of the new 4K 144 Hz screens (priced at 3000 dollars), or if you have some other graphics-intensive purpose that only a 1080 Ti or better can solve (but gaming is not one of them). The thing is that the 4K 144 Hz screens will take years to come down in price, and until that happens, to a growing share of people 1080 Ti performance is "good enough" for any game, on any screen out there (unless you are on an ultrawide 4096x1440 or something). My bet is that we will see a slowdown in performance leaps for some years and a more competitive focus on price. So the big question is: can AMD price their GPU competitively, and can they keep miners away from them?
I have a 1440p 165 Hz screen with G-Sync and a 1080 Ti. If I got a card that was twice as good, cool, but in almost any game I would never notice, since my fps would be 100+ with G-Sync on anyway. I'm not arguing your opinion, I'm just saying that I think a lot of other people will put value for money first (and, as said before, the focus will shift back to value for money, since we are in a valley where king-of-the-hill performance does not "take any hills the others will not be able to climb anyway"). You can quote me on that last part, it was brilliant of me...
If you buy a 1080 Ti, only max settings are acceptable xD
On maxed-ish settings in new games, a 1080 Ti will not do 1440p at 144/165 Hz or UHD at 60 fps; it tends to fall short. More performance would still be nice.
At the same time, if either AMD or NVIDIA can do what NVIDIA did last time with the GTX 1070 (same performance at 60% of the TDP), that would be very nice as well.
Although I do quite trust this guy's measurements. Granted, this is an engineering sample, but they cannot be that different, can they? [MEDIA=flickr]24FgMzM[/MEDIA]
Does not change the fact that everyone seems to measure it to a slightly larger size.
And Fury was like 500 MHz with 4 GB of HBM; huge difference.
Stagnation in GPUs = a step back. And the result of that is that playable performance at all levels becomes more expensive. If GPUs don't get more powerful each generation at EACH tier in the product stack, you will pay the same or even more for the same performance as last year, and this does not just apply to the 1080 Ti performance level... Why? Because games do get more demanding over time, and they edge closer to a higher GPU tier every year. Pascal has already been out for nearly two years now and there is no news of anything that will top it. Turing and an 1180? Same-ish performance, most likely. AMD's Vega? Not a gaming GPU (surely you don't still think they will keep trying that)... and it never was.
Your statement that the next challenge is 4K at 144 fps is complete and utter nonsense. You're talking about a top 1% that might consider that between now and the next 2-3 years, if even that. It combines a GPU and CPU bottleneck with a near-guarantee that you won't ever comfortably hit that target, even if you want to burn everything on computer hardware... Either way, AMD is not going to move us closer to that target with Vega; not with 35% more performance, and not even with 50% more performance.
In other news...
wccftech.com/exclusive-amd-navi-gpu-roadmap-cost-zen/
Mind you, I don't usually link WCCFtech, but this seems legit. Additionally, this article speaks of Radeon Instinct, which is not the gaming GPU, and everybody knows that if AMD tries to pull another Vega for gamers after the silly show that was HBM-based gaming SKUs, they have literally lost the plot. The only reason we got Vega in the first place (and the reason it was so badly optimized, and still is) is for AMD to have a bucket to toss the leftovers into that didn't make it into Frontier Edition or MI25 cards, because those do sell for 2-3x the price. This was already clear months ago, but nobody wanted to believe it (blaming the lack of cards on "miners and HBM supply", when in reality the cards never even got to the marketplace)... And here we are today.
Navi is not being developed strictly because of the PS5 either; it just so happens to be the next GPU Sony will use, because it has to, lol.
I don't like it when tech sites stretch the truth and add personal opinions to it (talking out of your ass) xD
This also puts Raja's PR moves in a whole other light: the man was literally just lying to us and he knew damn well he did. 'Poor Volta'... aimed at the gamer crowd, remember.
It's plausible, sure, that the console part did what it did.
HD 6970 - December 2010
HD 7970 - January 2012 (GHz Edition: June 2012)
R9 290X - October 2013
after that it slowed down
R9 FURY X - June 2015
RX Vega 64 - August 2017
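The "it slowed down" claim in the timeline above can be eyeballed with a quick sketch (launch dates taken from the list, resolved only to year and month):

```python
# Months between the AMD flagship launches listed above.
launches = [
    ("HD 6970",    2010, 12),
    ("HD 7970",    2012, 1),
    ("R9 290X",    2013, 10),
    ("R9 Fury X",  2015, 6),
    ("RX Vega 64", 2017, 8),
]

# Gap in months between each consecutive pair of launches.
for (a, ya, ma), (b, yb, mb) in zip(launches, launches[1:]):
    gap = (yb - ya) * 12 + (mb - ma)
    print(f"{a} -> {b}: {gap} months")
```

The gaps come out to roughly 13, 21, 20, and 26 months, so the cadence stretches from about a year early on to over two years by Vega.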
'Refinements'... that's just a rebrand on a smaller node, right? :P