
AMD Big Navi Performance Claims Compared to TPU's Own Benchmark Numbers of Comparable GPUs

Fury X - Flopped
Vega - Flopped
Radeon VII - Flopped
AMD did have special events announcing all of those GPUs, mind you. Radeon VII in particular flopped so hard it was dead on arrival.

Well, considering Navi10 is only slightly faster than TU106 (2070), it is already a remarkable improvement if Navi21 is faster than GA104, I will give you that. GA102, however, is a different beast.

And Navi21 is cheaper to produce than GA102? Do you know that AMD pays ~$9k per 7nm wafer at TSMC while Nvidia pays only ~$3k per 8N wafer at Samsung? GA102 is dirt cheap to produce; Nvidia has a 60%+ profit margin after all (66% last quarter).
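A quick back-of-the-envelope sketch of that cost argument. The wafer prices are the figures quoted above; the die areas are my own assumptions (GA102 is publicly listed around 628 mm², while Navi21's size was unconfirmed at this point, so ~520 mm² is a guess), and yield is ignored entirely:

```python
import math

def gross_dies_per_wafer(wafer_mm, die_area_mm2):
    # Common dies-per-wafer approximation: wafer area / die area,
    # minus an edge-loss term. Ignores defect density and yield.
    r = wafer_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_mm / math.sqrt(2 * die_area_mm2))

# Wafer prices from the post above; die areas are assumptions.
navi21_cost = 9000 / gross_dies_per_wafer(300, 520)  # TSMC 7nm
ga102_cost = 3000 / gross_dies_per_wafer(300, 628)   # Samsung 8N

print(f"Navi21: ~${navi21_cost:.0f}/die, GA102: ~${ga102_cost:.0f}/die")
```

Even with the larger die, the cheaper Samsung wafer makes the raw GA102 cost come out lower in this toy model, which is the point being made; real per-die costs depend heavily on yield, which this ignores.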

The best Navi21 silicon will be reserved for prosumer cards, where AMD makes more profit. So expect a cut-down Navi21 competing with and beating GA104, which is still good.

None of those GPUs were really designed for gaming, though, and the Fury X was made because it was the only thing the graphics department could put together under Rory Read, who wanted to move AMD away from high-end graphics altogether.

Vega and Radeon VII are based on the Vega architecture, which is clearly focused on professional and compute workloads. They have a large part of the die dedicated to the HBCC, which does nothing for gaming. The only upside to the VII cards is that they still sell for $700 because they do very well in professional workloads.

In essence, "Big Navi" will be AMD's first high-end gaming graphics card in a LONG time.

Black screens of death, no new features, once again marketed features that had to be disabled... How was that launch decent? It managed to be worse than Turing's.

I suggest you read up on Navi before saying it has no new features: https://www.techpowerup.com/256660/...ecks-ryzen-3000-zen-2-radeon-rx-5000-navi-etc

Among the features is localized L1 cache sharing. Shaders in a group can share content among their L1 caches. This not only increases effective bandwidth and cache size (by removing the need for duplicated data) but also reduces latency and cache misses. I would not be surprised if RDNA2 enables sharing across shader groups; anything that spares the GPU from accessing the L2 cache helps, since L2 is significantly slower to reach than L1.
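As a rough illustration of the deduplication point (a toy model of my own with made-up sizes, not RDNA's actual cache organization), sharing one copy of a common working set across several L1 caches frees capacity that duplication would otherwise waste:

```python
def unique_data_resident_kb(n_caches, cache_kb, shared_set_kb, copies):
    # Total storage is n_caches * cache_kb; every copy of the shared
    # working set beyond the first is duplicated data, not unique data.
    return n_caches * cache_kb - (copies - 1) * shared_set_kb

# 4 caches of 16 KB each, all needing the same 8 KB working set.
no_sharing = unique_data_resident_kb(4, 16, 8, copies=4)    # each cache holds a copy
with_sharing = unique_data_resident_kb(4, 16, 8, copies=1)  # one copy serves all

print(no_sharing, with_sharing)  # 40 vs 64 KB of unique data resident
```

More unique data resident means fewer misses that fall through to the slower L2, which is the effective-capacity benefit described above.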

Given the boost in performance Navi has over prior AMD GPUs, and that it brought AMD on par with Nvidia efficiency-wise, I would certainly not call it a Turing. Just because AMD only released mid-range parts certainly doesn't make it bad.

Also, what is your source for marketed features that had to be disabled? That was only Vega as far as I'm aware, and I've looked over all the marketing slides AMD has provided.
 
I'm not totally sure architecture changes count as features. Sure, that's the secret behind the performance uplift, but AMD still lacks variable rate shading, mesh shaders and ray tracing.
Primitive shaders are also in hardware in RDNA1, same as Vega, but were never enabled. Imagine developing a whole new stage in the graphics pipeline and never using it.
 
TIL AMD fanboys actually believe the "jebait" happened
Perhaps you're young; after all, you did just use 'TIL', which, after Googling it, I've just learned means 'Today I learned'. Perhaps you're very insightful, or perhaps naive...

Either way, you'd have to be living under a rock to miss the mind games going on in the industry right now. As such, you can choose to believe that Lisa Su just chucked her hand in and said "this is all we've got" at an event that was about CPU launches, or you can be open to the possibility that AMD is playing the game and leaving something in the tank for the main GPU launch event in a few weeks.

If they had nothing to challenge Nvidia, I believe they would have teased a 'wait and see' line. Of course, there is also the possibility that AMD are softening the blow.

In my job, which involves being very analytical, I would fail dismally every time if I made rash decisions without considering all possibilities. Then again, I trust my gut instinct.

I'm not saying you're wrong in terms of the eventual outcome, but I am saying you're wrong to mock and be definitive in your opinion. Time will tell.
 
Something I've learned after being disappointed in AMD's marketing for years is that the quieter they are, the better the product tends to be.
 
Something I've learned after being disappointed in AMD's marketing for years is that the quieter they are, the better the product tends to be.
Well, this one has definitely been kept under wraps, with the least hype of any launch in recent memory.
 
That's my only hope for this release, after all the lame products we had to tolerate.
 
Perhaps you're young; after all, you did just use 'TIL', which, after Googling it, I've just learned means 'Today I learned'. Perhaps you're very insightful, or perhaps naive...

Either way, you'd have to be living under a rock to miss the mind games going on in the industry right now. As such, you can choose to believe that Lisa Su just chucked her hand in and said "this is all we've got" at an event that was about CPU launches, or you can be open to the possibility that AMD is playing the game and leaving something in the tank for the main GPU launch event in a few weeks.

If they had nothing to challenge Nvidia, I believe they would have teased a 'wait and see' line. Of course, there is also the possibility that AMD are softening the blow.

In my job, which involves being very analytical, I would fail dismally every time if I made rash decisions without considering all possibilities. Then again, I trust my gut instinct.

I'm not saying you're wrong in terms of the eventual outcome, but I am saying you're wrong to mock and be definitive in your opinion. Time will tell.

The "jebait" people are referring to is from last year, when AMD purportedly pretended they planned to drop prices after the release of the Super cards and "tricked Nvidia" by launching the 5700 at a higher MSRP than they intended.

But no, I'll call their bluff on this one too. They've got nothing impressive in the GPU space. They blew Intel out of the water in HEDT, but big Navi is a flop.
 
The "jebait" people are referring to is from last year, when AMD purportedly pretended they planned to drop prices after the release of the Super cards and "tricked Nvidia" by launching the 5700 at a higher MSRP than they intended.

But no, I'll call their bluff on this one too. They've got nothing impressive in the GPU space. They blew Intel out of the water in HEDT, but big Navi is a flop.
It’s a flop based on what? I’m curious what benchmarks and specs you have to share to have come to this conclusion 3 weeks before anyone else really knows anything.
 
None of those GPUs were really designed for gaming, though, and the Fury X was made because it was the only thing the graphics department could put together under Rory Read, who wanted to move AMD away from high-end graphics altogether.

Vega and Radeon VII are based on the Vega architecture, which is clearly focused on professional and compute workloads. They have a large part of the die dedicated to the HBCC, which does nothing for gaming. The only upside to the VII cards is that they still sell for $700 because they do very well in professional workloads.

In essence, "Big Navi" will be AMD's first high-end gaming graphics card in a LONG time.

Yeah, Navi21 selling at $600 ±$50 can already be called high-end gaming.
The elephant in the room is that AMD would like to make a profit that can later fund R&D, and AMD can't do that in a price war against Nvidia (who already maintains godly profit margins). So all AMD can do is slot their GPUs into the gaps between Nvidia's price brackets, selling as many as they can while maintaining as high a margin as possible (5700/5700 XT ring any bells?).

Nvidia already knew the 3080/3090 would have no direct competition; the 3070 is the one Nvidia is prepping to counter Big Navi. If AMD decides to price Navi21 at $500, then Nvidia would have to lower the 3070's price accordingly, though I doubt AMD would do that.

Overall, Nvidia is touting the 3080 as their flagship gaming GPU, so there is no shame in Navi21 being slightly slower. Remember that Nvidia only needed TU104 to beat AMD's flagship before.
 
It’s a flop based on what? I’m curious what benchmarks and specs you have to share to have come to this conclusion 3 weeks before anyone else really knows anything.

Misinformed opinion, nothing more. He's a fool for making such a sweeping judgement this early. Not worth the time of day.
 
Misinformed opinion, nothing more. He's a fool for making such a sweeping judgement this early. Not worth the time of day.
I mean, TPU shouldn't even be using their own charts for comparison. Different API, different quality settings, and not the benchmark AMD used. It just adds more FUD and speculation.
 
It’s a flop based on what? I’m curious what benchmarks and specs you have to share to have come to this conclusion 3 weeks before anyone else really knows anything.

What do you have to say it isn't?

I don't have any benchmarks to say Zen3 will beat TGL either, but you don't seem to have a problem with that speculation.
 
What do you have to say it isn't?

I don't have any benchmarks to say Zen3 will beat TGL either, but you don't seem to have a problem with that speculation.
So we've gone from your flagrant speculation to speculation about a completely different product in a completely different hardware space, with about as much info to go on as your Navi "flop". Pick a lane...
 
Based on the IPC numbers AMD gave, it's a reasonable guesstimate: TGL will win some while Zen3 should win most. CPU benchmarks are mostly predictable, with very few outliers, and given it's the same x86-based uarch I doubt we'll see too many surprises. With GPUs you can already see how Ampere does vs Turing at lower resolutions; even a minor change can make a huge difference in performance, and at different resolutions there's no guarantee the same trend will apply!
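The kind of first-order guesstimate being described can be sketched like this (the numbers are purely illustrative placeholders, not AMD's official figures):

```python
def projected_score(base_score, ipc_gain, base_clock_ghz, new_clock_ghz):
    # First-order CPU model: performance scales with IPC x clock.
    # Real benchmarks only approximate this (memory, cache, boost behavior).
    return base_score * (1 + ipc_gain) * (new_clock_ghz / base_clock_ghz)

# Hypothetical: a Zen2 part scoring 100 at a 4.7 GHz boost clock,
# projected forward with a +19% IPC uplift and a 4.9 GHz boost clock.
print(round(projected_score(100, 0.19, 4.7, 4.9), 1))  # ~124.1
```

This is exactly why CPU projections are more predictable than GPU ones: the model has two knobs, while GPU performance shifts with resolution, API and workload balance.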
 
Based on the IPC numbers AMD gave, it's a reasonable guesstimate: TGL will win some while Zen3 should win most. CPU benchmarks are mostly predictable, with very few outliers, and given it's the same x86 uarch I doubt we'll see too many surprises. With GPUs you can already see how Ampere does vs Turing at lower resolutions; even a minor change can make a huge difference in performance, and so can different resolutions!
But we've moved the goalposts to unreleased mobile parts now?...
 
I'm assuming he's talking about absolute IPC-based numbers, so ISO-performance, and it's only a fair comparison in that regard. If we want to talk apples to apples, mobile SoCs, then it's a hard sell, because Zen3 mobile will have much less cache and probably a monolithic die. Unless you meant he's changing tack from GPU to CPU, in which case you're right.
 
I'm assuming he's talking about absolute IPC-based numbers, so ISO-performance, and it's only a fair comparison in that regard. If we want to talk apples to apples, mobile SoCs, then it's a hard sell, because Zen3 mobile will have much less cache and probably a monolithic die.
But he came in ranting with authority that Navi is a flop. I don't give a flying fig about the mobile space or speculation on another unreleased product. I wanna hear why Navi is a flop, not how Intel might finally pull themselves out of the hole they've dug for themselves in a hardware space completely unrelated to the topic at hand.
 
As many have written, AMD hype was minimal to none this time. Navi10 was an unknown quantity until its official reveal, just as Big Navi is now, and the first one turned out better than expected. A repeat, maybe?
 
Can you live with their trash drivers though?

If you can live with Nvidia's trash drivers, you can live with AMD's trash drivers.
 
Can you live with their trash drivers though?
Can't say I've had any show-stopping driver issues on my 5700 XT, and I have a FreeSync HDR monitor to boot, so I have all the variables. Only Enhanced Sync being broken comes to mind.
 
Can you live with their trash drivers though?
Never used anything other than ATI/AMD since the Rage Fury, so I do live fine with them. But I'm not such a FanATIc that I didn't consider the 3090; due to lack of availability, I decided to wait for Big Navi before making a choice. Using a Radeon VII, I do know about poor drivers, but then there have been the same issues over the years with the green cards.
 
I'm not totally sure architecture changes count as features. Sure, that's the secret behind the performance uplift, but AMD still lacks variable rate shading, mesh shaders and ray tracing.
Primitive shaders are also in hardware in RDNA1, same as Vega, but were never enabled. Imagine developing a whole new stage in the graphics pipeline and never using it.

RDNA2 already has all of those features confirmed.

RDNA2 also supports primitive shaders out of the box.

All I have to say is wait for the benchmarks.
 
RDNA2 already has all of those features confirmed.

RDNA2 also supports primitive shaders out of the box.

All I have to say is wait for the benchmarks.
Basically, to be compliant with DX12 Ultimate, or whatever it will be called, they have to have most of that list.
 
Makes me wonder what the whole point of releasing RDNA1 was: something with feature parity with Vega, already obsolete at launch.
Primitive shaders are dead by now; both Microsoft and Khronos went with mesh shaders. Two generations of products marketed as having something that never worked and never will.
 