
AMD Announces the Radeon VII Graphics Card: Beats GeForce RTX 2080

The problem will be 2020, when 7 nm NVIDIA will have NO competition. NONE.

Since you seem to know the future, can we get together and discuss the lottery? If your talent doesn't quite stretch that far, can we at least start with the Super Bowl? We can discuss how to split the profits.

This is a Vega 20 card. The Radeon Instinct card was already at 300 W, and this has higher clocks...

Also on a smaller node, no?
 
HBM is expensive and 16 GB is overkill. I would rather see it ship with 8 GB at a lower price. Can someone point me to a benchmark of a game using more than 8 GB of VRAM?

Only one game, Battlefield 5 running DX12 at 4K.
 
No, I own one, so I know exactly how much power it uses. I don't go by the outdated numbers you're using; I've tested both BIOSes many times and the performance difference is nonexistent. Oh, and the "hot" BIOS tops out at 276 W, so again, your 300 W figure is outdated. I've never even attempted undervolting. Yes, I can easily push it over 300 W (I ran the "hot" BIOS this morning with the clocks at 1750/1000 and a 50% power limit and hit 347 W peak), so nobody is saying it can't eat up power. But please drop your 300 W assumption; it's wrong, and at this point it's your opinion, not fact.
First you say it's never over 240 W, now it's 280 W. Measure that at 4K with no FPS limit. It's impossible that all the reviews lie.
 
Wasn't having fewer ROPs supposedly one of the biggest drawbacks of Fury and Vega? If you cut the ROPs in half, what would happen to performance? Surely no one wants a respun Vega 64.
Yes, they wouldn't have doubled the ROP count if they didn't think it was beneficial.
 
First you say it's never over 240 W, now it's 280 W. Measure that at 4K with no FPS limit. It's impossible that all the reviews lie.
You brought up the BIOSes; those are the numbers from both. Reading comprehension? Still, neither goes over 300 W unless I make it. I also told you how high I can make it go, which basically means slamming it to the wall... anything else you need clarified so I can keep disproving your 300 W BS?

Edit: Not seeing 300 here either...
 
Equating process node to performance is a fool's game... Turing is HUGE! It makes my Vega look small...

Edit: I guess my equating die size is no better, but the point still stands.

Yeah, I don't get why people care what nanometer node the chip is made on. It could be one nanometer for all I care; if it's hotter, slower, and more power hungry than something made at 25 nm, why would I give two shits?
 
A 550 mm² 12 nm card from NVIDIA runs at 230 W with RT features; a 7 nm, 330 mm² Vega can barely match it at 300 W.
They went with the old architecture, doubled the ROPs, and dropped some CUs, and that alone got them basically perf/die-size parity.

Nobody knows power consumption figures at this point.
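Just to put rough numbers on that, here's a back-of-the-envelope sketch using the figures quoted above (performance is assumed equal for both cards, which is the post's own premise, and the 230 W / 300 W numbers are unconfirmed, so treat this as illustrative only):

Code:
# Rough perf/area and perf/W comparison using the figures quoted in the post.
# Relative performance assumed equal (1.0) for both cards; real reviews may differ.
cards = {
    "RTX 2080 (12 nm)":  {"perf": 1.0, "area_mm2": 550, "power_w": 230},
    "Radeon VII (7 nm)": {"perf": 1.0, "area_mm2": 330, "power_w": 300},
}

for name, c in cards.items():
    perf_per_mm2 = c["perf"] / c["area_mm2"]
    perf_per_w = c["perf"] / c["power_w"]
    print(f"{name}: {perf_per_mm2:.5f} perf/mm^2, {perf_per_w:.5f} perf/W")

# With these assumed numbers, the smaller 7 nm die comes out ahead on perf/mm^2
# (~1.7x), while the 12 nm card comes out ahead on perf/W (~1.3x).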
 
Yeah that guy is just pathetic. I thought I was reading Trump quotes for a second there :D

He's just saying what everyone in here is saying. The card is not impressive at all.
 
They went with the old architecture, doubled the ROPs, and dropped some CUs, and that alone got them basically perf/die-size parity.

Nobody knows power consumption figures at this point.
I thought I saw 300 W mentioned in the OP. More ROPs is what they should've done a long time ago. It's all in the clock speeds, IMO; at 7 nm they should be able to clock higher than NVIDIA at 12 nm.
 
True, but do they need 128 ROPs in a desktop graphics card at this performance level?



A redesign isn't really necessary; just remove two of the HBM2 stacks from the GPU and disable two of the memory controllers (which would also disable half of the ROPs, if I'm not mistaken).

But then you have the Vega 64...

FC5 with the texture pack as well.

Well, I mean at stock. You can go download 4K Skyrim textures that load up the video RAM, but that's not the norm for that game.
 
But then you have the Vega 64...

Exactly. The fact that AMD straight-up doubled Vega 20's ROPs compared to Vega 10, rather than going with something like 96 ROPs / 12 GB, strongly suggests they are well aware Vega 10's performance is ROP-limited.
 
Hold your horses and wait for real reviews. If the owner and AMD CEO says it's on the same level as the RTX 2080, you should think twice... she compared it to the Founders Edition, of course.
And if the Radeon VII needs three fans for the reference model, what can the AIBs offer?

It looks like it's already highly overclocked... let's see.

From what I've heard, the RTX 2080 is easily faster than the Radeon VII, and if we compare efficiency, the RTX 2080 crushes it.
Even on a 7 nm process, the Radeon VII is no engineering triumph, far from it.
AMD fans will be disappointed... again.

Well, wait a few weeks for the tests and see for yourself.

With a 7 nm process and still over 300 W of gaming power draw, while still losing to the RTX 2080 and clearly to the RTX 2080 Ti, it's NOT and can't be an editor's choice. That's a fact.

AMD has all the aces in its hand, but the result is average.

Well, let's see when NVIDIA releases its next RTX series; it will surely be built on 7 nm as well, and NOT a single model will go over 250 W!

Bad work, AMD. Again.
 
Interesting bit of news (again) from Linus... he was looking at the VII and noticed it was whisper quiet EVEN while running a game. The fans were not running at high speed, and those do not look like particularly great fan blades...

Intriguing...

 
Honestly, the Radeon VII sucks, IMO. It has the same problem as Turing, in that the percentage performance boost in no way reflects the price.

You can get a Vega 64 for $400, or $500 if you don't want to wait and just want to pick one up anywhere.

OR you can get 30% more performance for $800, which makes no sense in terms of perf/$. And much like I think Turing sucks on perf/$ compared to Pascal, I think the Radeon VII sucks compared to Vega. This chip should be $500, not $800.
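A quick sanity check on that perf/$ claim, using the post's own figures (the $400/$800 prices and the ~30% uplift are the post's assumptions, not measured data):

Code:
# Back-of-the-envelope perf/$ comparison using the prices and ~30% uplift above.
vega64 = {"price": 400, "perf": 1.0}
radeon_vii = {"price": 800, "perf": 1.3}

ppd_vega = vega64["perf"] / vega64["price"]          # 0.00250 perf per dollar
ppd_vii = radeon_vii["perf"] / radeon_vii["price"]   # 0.00163 perf per dollar

print(f"Vega 64:    {ppd_vega:.5f} perf/$")
print(f"Radeon VII: {ppd_vii:.5f} perf/$  ({ppd_vii / ppd_vega:.0%} of Vega 64)")
# ~65% of Vega 64's perf/$ -- i.e. twice the price for roughly 1.3x the performance.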

This also doesn't look good for AMD's next generation, IMO. They need 7 nm just to match the performance and power consumption of 12 nm Turing. When Turing is inevitably ported to 7 nm, AMD will once again be several steps behind.
 
I think they're both staring at the same problem (7 nm) and reached the same conclusion (prices have to go up). It may get worse with each coming process generation, and they'll come slower. Blame physics.
 
I think they're both staring at the same problem (7 nm) and reached the same conclusion (prices have to go up). It may get worse with each coming process generation, and they'll come slower. Blame physics.
7 nm isn't THAT expensive. To justify these prices, 7 nm would have to be something like 4-5x as expensive as 14 nm, and I don't see that being possible, even if it is newer tech.

Sure, prices probably won't return to early-2010s levels anytime soon, but the currently jacked-up prices can be laid solely at the feet of no competition. NVIDIA has had sole control of the market for three years now, and AMD still isn't delivering much in the way of competition, seeing as its 7 nm chip manages to be less efficient than a 12 nm NVIDIA chip.

The moment AMD actually competes with Navi, prices will magically begin to fall as team red and team green undercut each other. Much like the CPU space, where an affordable 8-core Intel chip on the mainstream socket went from impossible to readily available once Ryzen hit the scene.
 
Look at the second reply here:
https://www.quora.com/How-much-does-it-cost-to-tapeout-a-28-nm-14-nm-and-10-nm-chip

AMD showed a similar chart itself. Costs are growing exponentially, not linearly; 7 nm likely costs double what 16/12 nm did. Sprinkle in the fact that HBM2 and GDDR6 supply is limited, and you've got a perfect storm of higher prices.
So 2x the price of the silicon, when the silicon isn't even 50% of the cost of a full GPU, somehow results in graphics cards being 100% more expensive? This also doesn't take into account that a GPU with the same core count would have a much smaller die at 7 nm, meaning more dies per wafer. Once the process matures, complex GPUs will be easier and cheaper to mass-produce from a limited number of wafers. If 7 nm is a full, proper node shrink, then in the ideal case you can fit roughly 4x the dies in the same wafer area.
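To put a rough number on that, here's a sketch using the standard dies-per-wafer approximation. The 500 mm² die size, the ideal 4x area shrink, and the 2x wafer-cost multiplier are illustrative assumptions (real-world 7 nm area scaling is less than the ideal case), not quoted figures:

Code:
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Common approximation: usable dies = pi*r^2/A - pi*d/sqrt(2*A)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Illustrative assumptions: ~500 mm^2 die on the older node, the ideal-case
# 4x area shrink on 7 nm, and a 7 nm wafer costing 2x as much.
old_dies = dies_per_wafer(500)   # ~111 dies per 300 mm wafer
new_dies = dies_per_wafer(125)   # ~505 dies per 300 mm wafer

relative_cost_per_die = (2.0 / new_dies) / (1.0 / old_dies)
print(f"old node: {old_dies} dies/wafer, 7 nm: {new_dies} dies/wafer")
print(f"7 nm silicon cost per die vs. old node: {relative_cost_per_die:.2f}x")
# ~0.44x in this ideal case -- doubled wafer prices alone don't explain doubled
# card prices (yield losses and non-silicon costs are ignored here).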

HBM2 has been in "short supply" for over 18 months now; that was the excuse when Vega 64 came out. So either that excuse is complete bunk, or "short supply" is actually normal supply and AMD really needs to stop using it. Either way, it's a poor decision by AMD, resulting in a lack of proper competition and massive price rises.

Go look at NVIDIA's quarterly earnings reports: a 47% year-over-year increase in net income for Q3 2018, for instance, despite 14 nm being 3x more expensive than 28 nm. If the cost increases mattered as much as you say, that net income would not be so freakishly high. NVIDIA and AMD are taking advantage of the market to pump prices and net income through the roof; higher wafer costs would justify a marginal price increase for GPUs, not the doubling we have seen over the last three years. This is only possible because NVIDIA has no competition, and now AMD is trying to get a piece of that pie.

If we had proper GPU competition, prices would likely be 30-40% lower than they are now. The high GPU prices in the current market are a direct result of NVIDIA wanting to line its pockets with fatter margins, and its financial results show that quite blatantly: record revenues, record net profits, and higher cash dividends and stock buybacks. The "rising costs" excuse is simple gaslighting to mislead people into accepting higher prices. While it is highly unlikely we will ever see $300 x80 GPUs again, they shouldn't cost anywhere near $800.
 
The costs are higher across the board: memory, GPU, PCB (especially integration of components on to it), and HSFs (AMD and NVIDIA both went triple fan).

You're forgetting that smaller nodes also mean more defects/higher failure rates. Why do you think AMD went full chiplet design for 7nm Ryzen? It's a cost-containing measure.
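A quick illustration of that defect/yield point, using a simple Poisson yield model; the defect density and the die sizes are assumed purely for illustration, not process data:

Code:
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Assumed defect density for a young 7 nm process (illustrative): 0.2 per cm^2.
d0 = 0.2 / 100  # convert defects/cm^2 to defects/mm^2

for area in (80, 330, 550):  # small chiplet vs. Vega-20-sized vs. big-GPU-sized die
    print(f"{area:>3} mm^2 die: ~{poisson_yield(area, d0):.0%} yield")

# ~85% for an 80 mm^2 chiplet, ~52% at 330 mm^2, ~33% at 550 mm^2 --
# which is why small chiplets are a cost-containing move on a new node.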

The bulk of HBM2 goes into cards that sell for many thousands of dollars (MI25 apparently still goes for $4900 or more). Radeon Vega cards get whatever is left.

AMD put the squeeze on Intel because the chiplet design lets them make processors more cheaply. Chiplets aren't something that really works for GPUs yet. Navi's original stated goal was to do exactly that; however, a chiplet-based Navi was apparently descoped years ago. NVIDIA hasn't even dabbled in chiplets as far as I know. Without chiplet GPU designs, costs will only get worse as processes shrink.
 