Friday, October 9th 2020

AMD Big Navi Performance Claims Compared to TPU's Own Benchmark Numbers of Comparable GPUs

At its October 8 online launch event for the Ryzen 5000 "Zen 3" processors, AMD provided a teaser of the company's next flagship graphics card, slotted into the Radeon RX 6000 series. This particular SKU has been referred to by company CEO Lisa Su as "Big Navi," meaning it could be the top part of AMD's upcoming client GPU lineup. As part of the teaser, Su held up the reference-design card and provided three performance numbers for it, as tested on a machine powered by a Ryzen 9 5900X "Zen 3" processor. We compared these performance numbers, obtained at 4K UHD, with our own testing data for the same games, to see how the card stacks up against other current-gen cards in its class. Our testing data for one of the games is from the latest RTX 30-series reviews; find details of our test bed here. We obviously used a different CPU, since the 5900X is unreleased, but we use the highest presets in our testing.

With "Borderlands 3" at 4K, using the "Badass" quality preset and the DirectX 12 renderer, AMD claims a frame-rate of 61 FPS. We tested the game with its DirectX 12 renderer in our dedicated performance review (test bed details here). AMD's claimed performance ends up 45.9 percent higher than that of the GeForce RTX 2080 Ti as tested by us, which yields 41.8 FPS on our test bed. The RTX 3080 ends up 15.24 percent faster than Big Navi, at 70.3 FPS. It's important to note here that AMD may be using a different, lighter test scene than us, since we don't use games' internal benchmark tools and design our own test scenes. It's also important to note that we tested Borderlands 3 with DirectX 12 only in the game's launch-day review, and use the DirectX 11 renderer in our regular VGA reviews.
With Gears 5, AMD claims performance of 73 FPS at 4K with the DirectX 12 renderer and the "Ultra" preset. This number ends up 16.24 percent higher than that of the RTX 2080 Ti, which scores 62.8 FPS on our test bed. The RTX 3080 is 15.61 percent faster than the AMD card, at 84.4 FPS.

Call of Duty: Modern Warfare was never added to our VGA review test selection, but we tested the game separately at launch (find its test bed information here). In that testing, we found the RTX 2080 Ti to score 77.9 FPS at Ultra settings with RTX off. In comparison, AMD claims Big Navi scores 88 FPS, making it 12.96 percent faster.

We know this is a very coarse and unscientific way to compare AMD's numbers to ours, and AMD has probably cherry-picked games that are best optimized for its GPUs, but it lends plausibility to the theory that Big Navi may end up comparable to the RTX 2080 Ti, and trade blows with the upcoming RTX 3070, which NVIDIA claims outperforms the RTX 2080 Ti. The RX 6000 "Big Navi" ends up with a geometric mean of roughly 24% higher frame-rates than the RTX 2080 Ti across these three tests, which would imply almost double the performance of the RX 5700 XT. Big Navi is rumored to feature double the CUs of the RX 5700 XT, so the claims somewhat line up.
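As a sanity check, the per-game uplifts and their geometric mean can be recomputed from nothing but the FPS figures quoted above (a quick sketch; AMD's exact inputs and test scenes may of course differ):

```python
# Per-game uplift of AMD's claimed Big Navi FPS over the RTX 2080 Ti
# FPS from TPU's test beds, plus the geometric mean across the games.
from math import prod

# (game, Big Navi claimed FPS, RTX 2080 Ti FPS on TPU's test bed)
results = [
    ("Borderlands 3",       61.0, 41.8),
    ("Gears 5",             73.0, 62.8),
    ("CoD: Modern Warfare", 88.0, 77.9),
]

ratios = [navi / ti for _, navi, ti in results]
for (game, _, _), r in zip(results, ratios):
    print(f"{game}: {100 * (r - 1):.2f}% faster")

# Geometric mean is the right average for ratios: the n-th root
# of the product of the per-game uplift ratios.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Geometric mean uplift: {100 * (geomean - 1):.1f}%")
```

The geometric mean is used rather than a plain average because frame-rate uplifts are multiplicative ratios, so averaging them arithmetically would overweight the biggest outlier (Borderlands 3 here).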

262 Comments on AMD Big Navi Performance Claims Compared to TPU's Own Benchmark Numbers of Comparable GPUs

#151
GoldenX
Something I've learned after being disappointed in AMD's marketing for years is that the more quiet they are, the better the product tends to be.
Posted on Reply
#152
INSTG8R
Vanguard Beta Tester
GoldenX
Something I've learned after being disappointed in AMD's marketing for years is that the more quiet they are, the better the product tends to be.
Well, this one has definitely been kept under wraps, with the absolute minimum of hype in recent memory.
Posted on Reply
#153
GoldenX
That's my only hope for this release, after all the lame products we had to tolerate.
Posted on Reply
#154
Blueberries
beedoo
Perhaps you're young; after all, you did just use 'TIL', which, after Googling it, I've just learned means 'Today I learned'. Perhaps you're very insightful, or perhaps naive...

Either way, you'd have to be living under a rock to miss the mind-games that are going on in the industry right now. As such you can choose to believe that Lisa Su just chucked her hand in and said this is all we've got - at an event that was about CPU launches, or you can be open to the possibility that AMD is playing the game, and leaving something in the tank for the main GPU launch event in a few weeks.

If they had nothing to challenge Nvidia, I believe they would have teased a 'wait and see' line. Of course, there is also the possibility that AMD are softening the blow.

In my job, which involves being very analytical, I would fail dismally every time if I made rash decisions without considering all possibilities. Then again, I trust my gut instinct.

I'm not saying you're wrong in terms of the eventual outcome, but I am saying you're wrong to mock and be definitive in your opinion. Time will tell.
The "jebait" people are referring to is from last year, when AMD purportedly pretended it planned to drop prices after the release of the Super cards and "tricked Nvidia" by launching the 5700 at a higher MSRP than it intended.

But no, I'll call their bluff on this one too. They've got nothing impressive in the GPU space. They blew Intel out of the water in HEDT, but Big Navi is a flop.
Posted on Reply
#155
INSTG8R
Vanguard Beta Tester
Blueberries
The "jebait" people are referring to is from last year, when AMD purportedly pretended it planned to drop prices after the release of the Super cards and "tricked Nvidia" by launching the 5700 at a higher MSRP than it intended.

But no, I'll call their bluff on this one too. They've got nothing impressive in the GPU space. They blew Intel out of the water in HEDT, but Big Navi is a flop.
It’s a flop based on what? I’m curious what benchmarks and specs you have to share to have come to this conclusion 3 weeks before anyone else really knows anything.
Posted on Reply
#156
nguyen
evernessince
None of those GPUs were really designed for gaming, though, and the Fury X was made because it was the only thing the graphics department could put together under Rory Read, who wanted to move AMD away from high-end graphics altogether.

Vega and Radeon VII are based on the Vega architecture, which is clearly focused on professional and compute workloads. They have a large part of the die dedicated to the HBCC, which does nothing for gaming. The only upside to the VII cards is that they still sell for $700 because they do very well in professional workloads.

In essence, "Big Navi" will mark AMD's return to high-end gaming graphics cards after a LONG time.
Yeah, Navi21 selling at $600 ±$50 can already be called high-end gaming.
The elephant in the room here is that AMD would like to make a profit that can later turn into R&D budget, and AMD can't do that if it goes into a price war against Nvidia (which already maintains godly profit margins). Therefore the only thing AMD can do is slot its GPU into the gap between Nvidia's price brackets, selling as many as it can while maintaining as high a profit margin as possible (5700/5700 XT ring any bells?).

Nvidia already knew the 3080/3090 would have no direct competition; the 3070 is the one Nvidia is prepping to counter Big Navi. If AMD decides to price Navi21 at $500 then Nvidia would have to lower the 3070's price accordingly, though I doubt AMD would do that.

Overall, Nvidia is touting the 3080 as its flagship gaming GPU; there is no shame in Navi21 being slightly slower. Remember that Nvidia only needed TU104 to beat AMD's flagship before.
Posted on Reply
#157
PooPipeBoy
INSTG8R
It’s a flop based on what? I’m curious what benchmarks and specs you have to share to have come to this conclusion 3 weeks before anyone else really knows anything.
Misinformed opinion, nothing more. He's a fool for making such a sweeping judgement at this early point in time. Not worth the time of day.
Posted on Reply
#158
INSTG8R
Vanguard Beta Tester
PooPipeBoy
Misinformed opinion, nothing more. He's a fool for making such a sweeping judgement at this early point in time. Not worth the time of day.
I mean, TPU shouldn't even be using its own charts for comparison: different API, different quality settings, and not the built-in benchmark AMD used. It just adds more FUD and speculation.
Posted on Reply
#159
Blueberries
INSTG8R
It’s a flop based on what? I’m curious what benchmarks and specs you have to share to have come to this conclusion 3 weeks before anyone else really knows anything.
What do you have to say it isn't?

I don't have any benchmarks to say Zen3 will beat TGL either but you don't seem to have a problem with that speculation
Posted on Reply
#160
INSTG8R
Vanguard Beta Tester
Blueberries
What do you have to say it isn't?

I don't have any benchmarks to say Zen3 will beat TGL either but you don't seem to have a problem with that speculation
So we've gone from your flagrant speculation to you shifting focus to speculation about a completely different product in a completely different hardware space, with about as much info to go on as your Navi "flop". Pick a lane...
Posted on Reply
#161
R0H1T
Based on the IPC numbers AMD gave, it's a reasonable guesstimate: TGL will win some, while Zen3 should win most. CPU benchmarks are mostly predictable, with very few outliers, and given it's the same x86-based uarch I doubt we'll see too many surprises. With GPUs you can already see how Ampere does vs Turing at lower resolutions; even a minor change can make a huge difference in performance, and at different resolutions there's no guarantee the same trend will apply!
Posted on Reply
#162
INSTG8R
Vanguard Beta Tester
R0H1T
Based on the IPC numbers AMD gave, it's a reasonable guesstimate: TGL will win some, while Zen3 should win most. CPU benchmarks are mostly predictable, with very few outliers, and given it's the same x86-based uarch I doubt we'll see too many surprises. With GPUs you can already see how Ampere does vs Turing at lower resolutions; even a minor change can make a huge difference in performance, and at different resolutions there's no guarantee the same trend will apply!
But we've moved the goal posts to unreleased mobile parts now?...
Posted on Reply
#163
R0H1T
I'm assuming he's talking about absolute IPC-based numbers, so ISO performance, and it's only a fair comparison in that regard. If we want to talk apples to apples, mobile SoCs, then it's a hard sell, because Zen3 mobile will have much less cache and probably a monolithic die. Unless you meant he's changing tack from GPU to CPU, in which case you're right.
Posted on Reply
#164
INSTG8R
Vanguard Beta Tester
R0H1T
I'm assuming he's talking about absolute IPC-based numbers, so ISO performance, and it's only a fair comparison in that regard. If we want to talk apples to apples, mobile SoCs, then it's a hard sell, because Zen3 mobile will have much less cache and probably a monolithic die.
But he came in ranting with authority that Navi is a flop. I don't give a flying fig about the mobile space or speculation on another unreleased product. I want to hear why Navi is a flop, not how Intel might finally pull itself out of the hole it has dug for itself in a hardware space completely unrelated to the topic at hand.
Posted on Reply
#165
jamexman
jesdals
I could live with that if the price is below the 3080's
Can you live with their trash drivers though?
Posted on Reply
#166
HD64G
As many have written, AMD's hype was minimal to none this time. Navi10 was an unknown quantity until its official reveal, just as Big Navi is now, and the first one turned out better than expected. A repeat, maybe?
Posted on Reply
#167
ZoneDymo
jamexman
Can you live with their trash drivers though?
If you can live with Nvidia's trash drivers, you can live with AMD's trash drivers.
Posted on Reply
#168
INSTG8R
Vanguard Beta Tester
jamexman
Can you live with their trash drivers though?
Can't say I've had any show-stopping driver issues on my 5700 XT, and I have a FreeSync HDR monitor to boot, so I have all the variables. Just Enhanced Sync being broken is all that comes to mind.
Posted on Reply
#169
jesdals
jamexman
Can you live with their trash drivers though?
I've never used anything other than ATI/AMD since the Rage Fury, so I do live fine with them, but I'm not so much of a FanATIc that I didn't consider the 3090; due to lack of availability, I decided to wait for Big Navi before making a choice. Using a Radeon VII, I do know about poor drivers; on the other hand, there have been the same kinds of issues over the years with the green cards.
Posted on Reply
#170
evernessince
GoldenX
I'm not totally sure if architecture changes can count as features. Sure, that's the secret behind the performance uplift, but AMD still lacks variable rate shading, mesh shaders and ray tracing.
Primitive shaders are also in hardware in RDNA1, same as Vega, but were never enabled. Imagine developing a whole new stage in the graphics pipeline and never using it.
RDNA2 already has all of those features confirmed.

RDNA2 also supports primitive shaders out of the box.

All I have to say is wait for the benchmarks.
Posted on Reply
#171
INSTG8R
Vanguard Beta Tester
evernessince
RDNA2 already has all of those features confirmed.

RDNA2 also supports primitive shaders out of the box.

All I have to say is wait for the benchmarks.
Basically, to be compliant with DX12 Ultimate, or whatever it will be called, they have to have most of that list.
Posted on Reply
#172
GoldenX
Makes me wonder what the whole point of releasing RDNA1 was: something with feature parity with Vega, so already obsolete at launch.
Primitive shaders are dead by now. Both Microsoft and Khronos went for mesh shaders. Two generations of products marketed with something that never worked and never will.
Posted on Reply
#173
INSTG8R
Vanguard Beta Tester
GoldenX
Makes me wonder what the whole point of releasing RDNA1 was: something with feature parity with Vega, so already obsolete at launch.
Primitive shaders are dead by now. Both Microsoft and Khronos went for mesh shaders. Two generations of products marketed with something that never worked and never will.
AMD is always chasing the latest features but sometimes they seem to lose focus.
Posted on Reply
#174
Blueberries
INSTG8R
So we‘ve gone from your flagrant speculation to shifting focus to your projection at speculation of a completely different product in a completely different hardware space with about as much info to go on as your Navi “flop“ Pick a lane....
No, I'm speculating just like you, but for some reason you think the burden of proof is on me to disprove your theory that they're hiding a more powerful GPU behind a curtain. You're the one throwing out flagrant speculation; I'm being realistic.
Posted on Reply
#175
cueman
Just my own research, calculations, sharp comparisons, etc...

Well, I can't see the RX 6900 XT reaching even close to RTX 3080 performance; I'd say the RTX 3080 10/20 GB is 20-25% faster.

AMD used a 5900X 12-core CPU in its teaser video. If an RTX 3080 got tested with it, that would help the 3080/3090 a lot too, because one of the things bottlenecking those GPUs is the CPU, and a 12-core 5900X helps.

The RX 6900 XT also has 16 GB of memory, which helps precisely in high-resolution gaming, like the 4K used in AMD's teaser video.

Still, the difference is quite clear, even in AMD's own cherry-picked games, which are an odd choice, because almost no test site uses them... hmm. Nearly impossible to compare, which is what AMD wants. Or why else? Come on, AMD... lol.

Summary:

An RTX 3080 20 GB paired with an AMD 5900X CPU, or with an Intel Rocket Lake 11th-gen CPU (coming in Q1 2021, by the way), gains, I dare say, another 5-10%.

So the difference between the RX 6900 XT and the RTX 3080 10 GB/20 GB is 20-25%. In practical terms: RX 6900 XT 50 FPS vs RTX 3080 60 FPS. I feel I'm doing Big Navi a favor with that omen... lol. Well, let's see.

Finally, I can't believe AMD's 300 W TDP for the RX 6900 XT. Sure, AMD can say the TDP is 300 W, but that's impossible in, for example, an actual gaming session.

Even a single RX 5700 XT, in its fastest variant, the Sapphire Nitro+ SE AIB card, has an official TDP of 285 W and measured (yes, in TechPowerUp's test) 285 W in average gaming, with a 296 W peak... wow. That's more than some RTX 2080 Ti AIB versions: the MSI RTX 2080 Ti Lightning draws 279 W in average gaming, 6 W less than the RX 5700 XT Sapphire Nitro+ SE. And yet, the 2080 Ti Lightning is 44% faster than that card in 4K gaming (TechPowerUp's test again, thanks).

So I don't think an RX 6900 XT, which is practically two 5700 XTs in CrossFire mode, can manage the same numbers... lol, no way.

We must remember there's no difference between RDNA2 and RDNA1 on the most important points: both still use a 7 nm process and both still use GDDR6 memory. RT support also eats juice, 25 W maybe?

I'd dare say 360 W average gaming for the RX 6900 XT, with peaks near 400 W, if it gets even within 20% of the RTX 3080's speed. That's because there are several tests on the internet of RX 5000-series CrossFire setups; for example, an RX 5600 XT + RX 5700 CrossFire setup measured 464 W. Yes, 464 W... Google it and you'll find it. If the RX 6900 XT loses by about 35%, then I'd believe the 300 W TDP. Today's top GPUs need that much power; they draw a lot of current.

Also remember that the RX 5700 XT is already a 7 nm GPU, so AMD can't do much more power saving there. The RTX 3080 is an 8 nm GPU, while its little brothers in the RTX 2000 series were made on 12 nm; the difference is big, a roughly 50% smaller die, and it's a lot! So when we know the RTX 3080 has amazing performance for its TDP, its measured average-gaming draw of 303 W is excellent.

Well, we'll see soon. The RTX 3000 series really earns all its glory, yes.

Also, the RTX 3070 and RTX 3060 Ti are coming (I heard the 3070 Ti is cancelled?). I bet those GPUs will overclock very well, and by my calculations at least an RTX 3070 AIB Lightning will be a hard case for the RX 6900 XT... maybe. But what's sure is that it's a lot cheaper, by $200 I guess. Well, let's see that too, soon.

P.S. The RTX 3090 is its own category, made just for 8K gaming; the RTX 3080 10 GB & 20 GB is for 4K gaming; the RTX 3060 Ti and RTX 3070/Super are for FHD and WQHD gaming; an RTX 3070 Super (16 GB) would also be for 4K gaming.

Third GPU: but where is Intel's Xe GPU?! It would be great to see a three-way battle line!

RX 6900 XT price: I can't believe AMD can sell the RX 6900 XT under the RTX 3080's price; a little more, I'd say $699-799.

Notice: AMD needs cash more than Nvidia does. It can't keep running an overdraft forever with its zero/2-3% profit price politics.

A great end of the weekend to all!
Posted on Reply