Friday, October 9th 2020

AMD Big Navi Performance Claims Compared to TPU's Own Benchmark Numbers of Comparable GPUs
AMD, in its October 8 online launch event for the Ryzen 5000 "Zen 3" processors, provided a teaser of the company's next flagship graphics card, slotted into the Radeon RX 6000 series. This particular SKU has been referred to by company CEO Lisa Su as "Big Navi," meaning it could be the top part of AMD's upcoming client GPU lineup. As part of the teaser, Su held up the reference-design card and provided three performance numbers for it, as tested on a machine powered by a Ryzen 9 5900X "Zen 3" processor. We compared these performance numbers, obtained at 4K UHD, with our own testing data for the same games, to see how the card compares to other current-gen cards in its class. Our testing data for one of the games is from the latest RTX 30-series reviews (find details of our test bed here). We obviously used a different CPU, since the 5900X is unreleased, but we use the highest presets in our testing.
With "Borderlands 3" at 4K, with "badass" performance preset and DirectX 12 renderer, AMD claims a frame-rate of 61 FPS. We tested the game with its DirectX 12 renderer in our dedicated performance review (test bed details here). AMD's claimed performance ends up 45.9 percent higher than that of the GeForce RTX 2080 Ti as tested by us, which yields 41.8 FPS on our test bed. The RTX 3080 ends up 15.24 percent faster than Big Navi, with 70.3 FPS. It's important to note here that AMD may be using a different/lighter test scene than us, since we don't use internal benchmark tools of games, and design our own test scenes. It's also important to note that we tested Borderlands 3 with DirectX 12 only in the game's launch-day review, and use the DirectX 11 renderer in our regular VGA reviews.With Gears 5, AMD claims performance of 73 FPS at 4K, with its DirectX 12 renderer, with the "Ultra" preset. This number ends up 16.24 percent faster than that of the RTX 2080 Ti, which scores 62.8 FPS on our test bed. The RTX 3080 is 15.61 percent faster than the AMD card, at 84.4 FPS.
Call of Duty: Modern Warfare was never added to our VGA review test selection, but we tested the game separately at launch (find its test bed information here). In that testing, we found the RTX 2080 Ti scores 77.9 FPS at Ultra settings with RTX off. In comparison, AMD claims Big Navi scores 88 FPS, making it 12.96 percent faster.
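For readers who want to reproduce the arithmetic, here is a minimal sketch of how the percent differences quoted above are derived (the `percent_faster` helper is our own illustrative name; the FPS figures are the AMD-claimed and TPU-measured numbers from this article):

```python
def percent_faster(a: float, b: float) -> float:
    """Return how much faster a is than b, in percent."""
    return (a / b - 1) * 100

# Borderlands 3, 4K, DirectX 12, "badass" preset
print(percent_faster(61.0, 41.8))   # Big Navi vs RTX 2080 Ti -> ~45.9
print(percent_faster(70.3, 61.0))   # RTX 3080 vs Big Navi   -> ~15.2

# Gears 5, 4K, DirectX 12, "Ultra" preset
print(percent_faster(73.0, 62.8))   # Big Navi vs RTX 2080 Ti -> ~16.2
print(percent_faster(84.4, 73.0))   # RTX 3080 vs Big Navi   -> ~15.6

# Call of Duty: Modern Warfare, 4K, Ultra settings, RTX off
print(percent_faster(88.0, 77.9))   # Big Navi vs RTX 2080 Ti -> ~13.0
```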
We know this is a very coarse and unscientific way to compare AMD's numbers to ours, and AMD has probably cherry-picked the games best optimized for its GPUs, but it lends plausibility to the theory that Big Navi may end up comparable to the RTX 2080 Ti and trade blows with the upcoming RTX 3070, which NVIDIA claims outperforms the RTX 2080 Ti. The RX 6000 "Big Navi" ends up with a geometric mean of 21% higher frame rates than the RTX 2080 Ti across these three tests, which would imply almost double the performance of the RX 5700 XT. Big Navi is rumored to feature double the CUs of the RX 5700 XT, so the claims somewhat line up.
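The 21% figure matches the geometric mean of the three per-game percentage leads quoted above; a quick sketch, assuming that is how it was computed:

```python
from math import prod  # Python 3.8+

# Big Navi's per-game leads over the RTX 2080 Ti, in percent (from above)
leads = [45.9, 16.24, 12.96]

# Geometric mean of the three percentage leads
geomean = prod(leads) ** (1 / len(leads))
print(round(geomean, 1))  # -> 21.3
```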
With "Borderlands 3" at 4K, with "badass" performance preset and DirectX 12 renderer, AMD claims a frame-rate of 61 FPS. We tested the game with its DirectX 12 renderer in our dedicated performance review (test bed details here). AMD's claimed performance ends up 45.9 percent higher than that of the GeForce RTX 2080 Ti as tested by us, which yields 41.8 FPS on our test bed. The RTX 3080 ends up 15.24 percent faster than Big Navi, with 70.3 FPS. It's important to note here that AMD may be using a different/lighter test scene than us, since we don't use internal benchmark tools of games, and design our own test scenes. It's also important to note that we tested Borderlands 3 with DirectX 12 only in the game's launch-day review, and use the DirectX 11 renderer in our regular VGA reviews.With Gears 5, AMD claims performance of 73 FPS at 4K, with its DirectX 12 renderer, with the "Ultra" preset. This number ends up 16.24 percent faster than that of the RTX 2080 Ti, which scores 62.8 FPS on our test bed. The RTX 3080 is 15.61 percent faster than the AMD card, at 84.4 FPS.
Call of Duty: Modern Warfare was never added to our VGA review test selection, but we tested the game separately at launch (find its test bed information here). In this testing, we found the RTX 2080 Ti to score 77.9 FPS at Ultra Settings, with RTX-off. In comparison, AMD claims Big Navi scores 88 FPS, making it 12.96 percent faster.
We know this is a very coarse and unscientific way to compare AMD's numbers to ours, and AMD has probably cherry-picked games that are most optimized to its GPUs, but it lends plausibility to a theory that Big Navi may end up comparable to the RTX 2080 Ti, and trade blows with the upcoming RTX 3070, which NVIDIA claims outperforms the RTX 2080 Ti. The RX 6000 "Big Navi" ends up with a geometric mean of 21% higher frame-rates than the RTX 2080 Ti in these three tests, which would imply almost double the performance of the RX 5700 XT. Big Navi is rumored to feature double the CUs as the the RX 5700 XT, so the claims somewhat line up.
262 Comments on AMD Big Navi Performance Claims Compared to TPU's Own Benchmark Numbers of Comparable GPUs
Another thing of note is TDP. I'm not a huge fan of anything north of 220-250W. Air will struggle with that.
Ironically, the best handle on future proofing is gained by looking back. History repeats.
They can afford this tactic because they rule the desktop market.
New games will widen the gap between old and new more and more.
As I've told you before, the fact remains that increasing VRAM usage nearly always requires both more bandwidth and more computational performance to utilize it, which is why you usually hit other bottlenecks long before VRAM.
Not really, that's combined system RAM and VRAM, so it's not a fair comparison.
No objection there. ~250 W seems to be the spot where cooling becomes a hassle.
Exactly how many years do you have to wait, then?
The fact is, Nvidia retains legacy support while AMD drops it.
AMD can't even get their current driver support right, how can they be better at legacy support? Yeah right.
I'm sure those old Radeon 200/300 will finally be unleashed any day now. :rolleyes:
Driver tricks will make RTX 3000 more and more appealing in new games.
I agree, efikkan, AMD sucks: they will launch a mediocre GPU with bad drivers at first, and it will probably consume a lot of power. But they won't handicap a good card to force you to buy a new generation, because they already suck and don't care much about this market. They care far more about the millions of orders from Microsoft and Sony than about the very, very few "enthusiasts" who might buy these high-end cards. That's why the first drivers will be so bad you're lucky if you boot into Windows. Why bother, for practically no money compared to what they make on consoles?
They are here to piss off Nvidia a bit but not really compete.
No, sorry, all of it.
Opaquifying the B's for mortals, good work.
Of course I am going to wait for third-party benchmarks, but it's nice to see a company that isn't trying to insult your intelligence with "2X the Performance!!!!! *At 8K with RTX enabled vs a 2080".
I'll go for bang for buck, as always.
But when you want more performance while keeping good bang for buck, the 3080 is just as good value as the 5700; they come to around the same FPS per dollar (provided you can buy a 3080 right now :D).
So had @Kaleid said "most people can't afford the 3080," I wouldn't have any issue with that, since Ampere is rolling out at every price point just like Turing anyway.
Better to wait and see the RDNA 2 results to compare bang for your buck properly.
The cards are in NVIDIA's hands until they release the 2060.
Nah, they will find a way to screw it up, and it better not be disabling ray tracing due to "reasons".
I reckon RDNA2 had better have 30% more perf/dollar than Ampere in rasterization; only then does it make sense to overlook its lack of finesse in other areas (RT/DLSS/HW encoding/drivers/Reflex).
Besides, I bought the 5700 used, saving 100 USD in the process, so the 580 wouldn't have had any chance when it comes to bang for the buck.
And the 3080 got inserted into the discussion because it also provides a lot for the money. Sure, but since I'm at 1440p, its performance would be utterly wasted on my monitor.
AMD raised the 5000 series CPU prices but I'm quite certain that very few will expect AMD to do the same with their new GPUs.
I'll change GPU next year, but not until I get to see Microsoft's DirectStorage in action.
So, pick the 20 GB RTX 3080 and use the same Ryzen 5900 CPU for the test... no doubt about the outcome.
But sure, it seems clear enough: AMD can't pull off any miracle with what it's building... still old parts and the usual updates.
I mean, the RX 6900 XT is essentially 2x a 5700 XT in CrossFire mode, so the RX 6900 XT can't deliver more performance than that... and AMD's quoted TDP for that performance, 300 W, is a joke.
I say it will take over 350 W, and that's if it even reaches 80% of the RTX 3080's speed.
RT speed is also interesting... in that category AMD is in its infancy... and RT support, by the way, eats more juice... 25 W?
We can imagine... er, or let's just wait for this TechPowerUp review:
RTX 3080 FE 10 GB/20 GB + Ryzen 5900 CPU vs. RX 6900 XT 16 GB + Ryzen 5900 CPU
I can bet on a 25% win for the RTX 3080... and I'd add at least 20% more difference when the RTX 3090 with a Ryzen 5900 CPU is put up against the RX 6900 XT.
I see the RTX 3000 series' higher GPUs need a powerful CPU as a partner... for now I'd say a Ryzen 5900 or so, later Intel 10 nm and of course 7 nm parts.
Hmm, an RTX 3070/Ti against the RX 6900 XT... well, what's sure is that the RTX 3070 is cheaper and might fight hard against it... lose 10% but be $200 cheaper...?
Let's see... waiting for the tests... only a few weeks!
Also, it would be nice to get Intel's Xe GPUs into the battle line too; then we'd have three GPUs for the soap opera!
The GPU has shit balance, like it or not, and $700 isn't exactly a steal for such a product.