Wednesday, January 9th 2019

AMD Announces the Radeon VII Graphics Card: Beats GeForce RTX 2080

AMD today announced the Radeon VII (Radeon Seven) graphics card, implementing the world's first GPU built on the 7 nm silicon fabrication process. Based on the 7 nm "Vega 20" silicon with 60 compute units (3,840 stream processors) and a 4096-bit HBM2 memory interface, the chip leverages 7 nm to dial up engine clock speeds to unprecedented levels (possibly above 1.80 GHz). CEO Lisa Su states that the Radeon VII performs competitively with NVIDIA's GeForce RTX 2080 graphics card. The card features a gamer-friendly triple-fan cooling solution with a design focus on low noise, and carries 16 GB of HBM2 memory. Available from February 7th, the Radeon VII will be priced at USD $699.

Update: We went hands-on with the Radeon VII card at CES.

153 Comments on AMD Announces the Radeon VII Graphics Card: Beats GeForce RTX 2080

#76
xkm1948
AdoredTV apparently is pissed off by the Radeon 7 as well. When a diehard AMD fan gets angry with your GPU and calls it "shit," you know how f**ked up the product is.

#77
15th Warlock
Xzibit said:
Doesn't it have 2x more ROPs than before?
I think we can both agree that even doubling the number of ROPs won't match the sheer performance gain from adding more CUs, especially since these new CUs will run at a higher clock.

Instead, we see a regression in the number of CUs and, yes, double the ROPs, but no clear gains from an almost 40% smaller fab process.

It's mind-boggling. Nvidia screwed up by dedicating more silicon to RT and tensor units, silicon that could've been used for more CUDA cores and more performance, but they bet the farm on ray tracing and DLSS.

If you ask me, AMD is making a very similar mistake: fewer CUs, but double the HBM2, raising the price to the point where they can't be price-competitive with Nvidia like they've always been.

A missed opportunity for both camps, but hindsight is 20/20.
#78
Eric3988
$700 for a shrunk Vega doesn't have me excited as a Vega owner. 16GB of HBM2 sounds great as does the triple fan setup. I'll wait for the reviews, but more money for old technology won't have me reaching for the wallet.
#79
Totally
15th Warlock said:
I think we can both agree that even doubling the number of ROPs won't match the sheer performance gain from adding more CUs, especially since these new CUs will run at a higher clock.

Instead, we see a regression in the number of CUs and, yes, double the ROPs, but no clear gains from an almost 40% smaller fab process.

It's mind-boggling. Nvidia screwed up by dedicating more silicon to RT and tensor units, silicon that could've been used for more CUDA cores and more performance, but they bet the farm on ray tracing and DLSS.

If you ask me, AMD is making a very similar mistake: fewer CUs, but double the HBM2, raising the price to the point where they can't be price-competitive with Nvidia like they've always been.

A missed opportunity for both camps, but hindsight is 20/20.
Tbh, this looks like AMD taking advantage of the situation to unload some inventory that it was otherwise going to write off.
#80
FordGT90Concept
"I go fast!1!11!1!"
xkm1948 said:
AdoredTV apparently is pissed off by the Radeon 7 as well. When a diehard AMD fan gets angry with your GPU and calls it "shit," you know how f**ked up the product is.


Naw, he just knew it wasn't anything to get excited about, which was pretty well understood months ago when they called it "Vega 20." Die shrinks are never miracle workers.

Totally said:
Tbh, this looks like AMD taking advantage of the situation to unload some inventory that it was otherwise going to write off.
The 60 CUs certainly suggest that. All of the good chips are binned for Radeon Instinct products. If Radeon Instinct demand falls off, we might see AMD debut a 64 CU gaming card.


AMD is betting the gaming house on Navi.
#81
Th3pwn3r
Well, at least I wasn't expecting anything great from AMD for a while. Still, this is good because it'll force price reductions (fingers crossed).
#82
15th Warlock
Totally said:
Tbh, this looks like AMD taking advantage of the situation to unload some inventory that it was otherwise going to write off.
Who knows, I mean, they probably have enough Radeon Instinct MI50 GPUs around to make it profitable to sell them in mainstream cards.

You might be onto something.
#83
wolf
Performance Enthusiast
The CEO states it's competitive with the RTX 2080, not that it outright beats it (as the title of this news post claims), except in one Vulkan bench, and they strategically made sure it wasn't Wolfenstein II; Turing cards show very strong perf in Wolf 2, especially with VRS.

A shrunk Vega splitting the difference in CUs, double the memory, overclocked to use the same 300 W... double the ROPs too, yet not a lot to show for it? Well, it's a stopgap, a cheap one that cost them little to no R&D, and they get to use dies that didn't make the cut for Instinct products. Not a whoooole lot to get excited about, especially since they are following Nv's price point for that level of performance, without RT/DLSS/VRS(?) and at significantly higher power consumption, and thus probably more heat to dissipate.

I do believe, however, this card will do decently as the res increases; as a vague example, let's say it might trail a 2080 by 5% at 1440p but lead a 2080 by 5% at 2160p across more than 3 games. These vendor slides are always whimsical as hell; Nv and AMD both do it.

As always I eagerly await W1z's full review before truly being able to judge the card against the competition.
#84
moproblems99
Well, this will likely be my next card. We'll see. Wish it were only 8 GB and about $600, but whatever; hope they are making a good chunk on them. I can't imagine they are, with the cost of HBM.
#85
Divide Overflow
I'm thrilled that for once, at least, it looks like AMD has gotten rid of the squirrel-cage blowers on their reference cards.
#86
noel_fs
It doesn't have tech comparable to Nvidia's RTX, but at least it has 16 GB of HBM, which some people will love.
#87
Fatalfury
So people want AMD's 7 nm to compete against Nvidia's 12 nm???

Just because they were first to get a TSMC 7 nm contract doesn't mean they are better...

When Nvidia releases 7 nm, only then will it be a real comparison.
#88
INSTG8R
My Custom Title
jabbadap said:
RTX 2080 has 8-pin + 6-pin. Not that 2x 8-pin means a 375 W TDP, e.g. the RTX 2080 Ti has two of them and its TDP is 260/250 W (FE/ref.).
Sorry, I went by the first 2080 review I found, which was the Zotac...
#89
moproblems99
Fatalfury said:
When Nvidia releases 7 nm, only then will it be a real comparison.
Not really. What matters is what they do and how they do it.
#90
Xzibit
FordGT90Concept said:

AMD is betting the gaming house on Navi.
This day turned out to be fun. Jensen comes off as angry with AMD and Intel in some of his interviews.


Lisa isn't responding with hate, but she did drop some interesting tidbits in there:
“I think ray tracing is an important technology, and it’s something we’re working on as well, both from a hardware and software standpoint,” Su said. “The most important thing, and that’s why we talk so much about the development community, is technology for technology’s sake is okay, but technology done together with partners who are fully engaged is really important.”
Nvidia has received some criticism from enthusiasts concerning the price of its RTX cards and the relative lack of game support at present. Su indicated that building a development ecosystem was important.
Later, Su expanded on her thought. “I don’t think we should say that we are ‘waiting,’” Su said, in response to this reporter’s question. “We are deep in development, and that development is concurrent between hardware and software.”
“The consumer doesn’t see a lot of benefit today because the other parts of the ecosystem are not ready,” Su added.
Might allude to the recent announcement that the upcoming Gran Turismo for PlayStation is working on in-house ray tracing.
#92
Zubasa
FordGT90Concept said:
Why 60 CU instead of 64 CU? AMD giving themselves wiggle room in terms of manufacturing? Or was it designed with 60 from the start to make room for more memory controllers?
It's not like those extra CUs do anything but make the card run hotter and pull more power.
Just look at Vega 56 vs 64: they perform the same at equal clocks 99% of the time.
Hell, even Fury vs Fury X is like that.
These chips are mostly geometry-limited in games; throwing more CUs at them won't make it any better.
#93
xkm1948
Zubasa said:
It's not like those extra CUs do anything but make the card run hotter and pull more power.
Just look at Vega 56 vs 64: they perform the same at equal clocks 99% of the time.
Hell, even Fury vs Fury X is like that.
These chips are mostly geometry-limited in games; throwing more CUs at them won't make it any better.
Yeah, GCN is too compute-focused instead of graphics-focused. Sadly, very few developers use GCN for compute, as CUDA dominates the GPU acceleration market.


But hey, I bet this Radeon 7 mines crypto coins like no tomorrow! 16 GB of freaking HBM2 at 1 TB/s bandwidth?? If crypto coins take off again these cards will sell like gold!
#94
INSTG8R
My Custom Title
okidna said:
Let me help you :
NVIDIA GeForce RTX 2080 Founders Edition 8 GB Review

TBH power consumption is literally the last thing I care about. I can run my card at 240 W, or now with the new Wattman, 300+. At the end of the day it's about performance. If it took 450 W I still wouldn't care, but assuming 2x 8-pin automatically means high power consumption is just an assumption at this point. My card on release had 3x 8-pin, but that card no longer exists and is now a 2x 8-pin. Bottom line for me: power consumption means very little and actual performance matters a lot. The difference is literally pennies a year, so why should I care?
#95
FordGT90Concept
"I go fast!1!11!1!"
Xzibit said:
Might allude to the PlayStations upcoming Gran Turismo recent announcement that it was working on in-house Ray-Tracing.
I think the only way that's true is if the PlayStation 5 is completely ray traced. It's not impossible... but it is unlikely. If it is, NVIDIA is going to lose at its own game. Sony wouldn't be interested in anything that can't do 10+ GR/s for less than 100 W. NVIDIA is using in excess of 200 W to get that result. Sony would also be concerned about the cost side of it and would reject huge, monolithic dies like Turing.
#96
cucker tarlson
xkm1948 said:
AdoredTV apparently is pissed off by the Radeon 7 as well. When a diehard AMD fan gets angry with your GPU and calls it "shit," you know how f**ked up the product is.


seeing this hack so pissed is literally the best thing about Radeon 7 :roll:

the card is good for now if it can really match the 2080 and has 16 GB onboard. the problem will be 2020, when 7 nm Nvidia will have NO competition. NONE.
#97
INSTG8R
My Custom Title
cucker tarlson said:
seeing this hack so pissed is literally the best thing about Radeon 7 :roll:

the card is good for now if it can really match the 2080 and has 16 GB onboard. the problem will be 2020, when 7 nm Nvidia will have NO competition. NONE.
We know little about Navi, which will already be 7 nm. Slate AMD all you like, but they are ahead of the game on process node.
#98
cucker tarlson
INSTG8R said:
We know little about Navi, which will already be 7 nm. Slate AMD all you like, but they are ahead of the game on process node.
Which gives them Nvidia's 16 nm-gen performance in the enthusiast segment (Vega 7 vs 1080 Ti), and probably Nvidia's 12 nm-gen performance with 7 nm Navi, if it can match the 2060/2070.
#99
Sandbo
Depending on the FP64 performance, this could be a hell of a card for compute, though it would have been perfect if it came with PCIe 4.0.
#100
Assimilator
Unless AMD has somehow f**ked up in a way not yet thought possible, the "Radeon 7" will be faster than Vega. Even if this card is only barely competitive with the RTX 2080, that's still better than having NO competitor.

16 GB of HBM2 is a gimmick that is gonna be hella pricey, though; while I doubt AMD will make a loss on this card, there's no way their margins are anywhere near NVIDIA's. So this is just an attempt to save face, and as others have mentioned, it rather points to the fact that Navi is not progressing as well as AMD had hoped.