
AMD Big Navi Performance Claims Compared to TPU's Own Benchmark Numbers of Comparable GPUs

The 5500 XT competes with the 1650 Super.

There is no AMD answer to the vanilla 1650, which I find odd; perhaps they'll fix that this generation.
I would say Polaris, and you may be on to something. Polaris is rather old, but I am sure it has sold well for AMD. Navi went through a rough patch, but everything seems to be roses now (apart from the memory clock jumping all over the place). A budget refresh with the same GPU as the PS5 or Xbox One would be nice at $169 US.
 
Nvidia will also optimize their drivers and unlock another 10-15% of performance; they've done this many times in the past.

Cheeky monkeys. Not only are these cards SUPER expensive, they also tap out the higher performance ceiling.
 
One thing people gloss over is that AMD has just announced "the fastest gaming CPUs". I bet these results were obtained running said CPUs.

Yeah, pretty funny that AMD used a 2080 Ti for benchmarking their CPU against the 10900K, but somehow can't bench the 2080 Ti against their RX 6000. Pretty much a dead giveaway that AMD aren't so confident about their RX 6000 GPUs.
AMD is now trying to crank as many clocks out of these chips as they can so that they're a justifiable purchase.
 
It is weird that only TPU gets a +10 FPS boost at 4K in Borderlands, but Nvidia's own official slide on their webpage shows the 3080 FE card scoring exactly 61 fps in that game... hmmmm
Also, YouTuber "joker" tested his 3080 FE card and got 57.7 fps at 4K, which is in line with what Nvidia officially got. And now TPU gets 70 fps :D Suspicious.
Yup, very weird. It's almost as if people are testing using different systems or something.

Yeah, pretty funny that AMD used a 2080 Ti for benchmarking their CPU against the 10900K, but somehow can't bench the 2080 Ti against their RX 6000. Pretty much a dead giveaway that AMD aren't so confident about their RX 6000 GPUs.
Well, of course they can't bench against the RX 6000; they're also under NDA.
 
Well, of course they can't bench against the RX 6000; they're also under NDA.

Huh, they are the ones who give out the NDA :D, not the ones under it.
 
They (AMD) are the one who knocks!
 
Yeah, pretty funny that AMD used a 2080 Ti for benchmarking their CPU against the 10900K, but somehow can't bench the 2080 Ti against their RX 6000. Pretty much a dead giveaway that AMD aren't so confident about their RX 6000 GPUs.

Pretty funny that they didn't want to benchmark GPUs that aren't even announced yet. Yeah, funny and strange ...

AMD is now trying to crank as many clocks out of these chips as they can so that they're a justifiable purchase.

You mean like Ampere, which basically has zero frequency headroom left? Anyway, those aspects are decided early in the development process, as they are closely linked with the manufacturing process, so they aren't cranking up anything.
 
Simple: give me a good price/perf ratio and I will gladly switch to RED for my second workstation rig (2x 5700 XT performance at the same MSRP as the 5700 XT, or $600 territory, and I will take two of your 6000 series cards, or even your XTX version).
Otherwise, I will stay with green. :laugh:
 
Yeah, pretty funny that AMD used a 2080 Ti for benchmarking their CPU against the 10900K, but somehow can't bench the 2080 Ti against their RX 6000. Pretty much a dead giveaway that AMD aren't so confident about their RX 6000 GPUs.
AMD is now trying to crank as many clocks out of these chips as they can so that they're a justifiable purchase.

The 2080 Ti is probably the fastest commercially available GPU that AMD could get its hands on (3080/3090 cards are too hard to get, even for AMD). As for why they would use commercially available hardware: so that all of us readers, reviewers, and tech news analysts can make sense of the results based on our own experience with that hardware. By your logic, it would be okay for AMD to take a GPU that won't launch for another year, working in their labs, and release public numbers with it. That makes no sense. With the 2080 Ti, we can all verify AMD's claims about how the CPU competition performs.
 
I didn't put any words in your mouth, you gotta chill. You said this is "probably the top model" because she mentioned Big Navi, and I pointed out that "Big Navi" was never thought of as a specific model but rather simply as the silicon. You basically phrased it in a way that implies Big Navi could only mean the top-end card, and I don't think so; there are going to be at least two cards on that "Big Navi" die.
Oh come on now. Give me a break. Yes, you did. Because you want to create an argument on the basis of which you can come out and say that this is NOT the top model. Well, you can't say that, I can't say that; most people can't say which model it is.

But we can speculate.

And when people talk about "Big Navi" they talk about the TOP model, not the silicon. You can have 15 models based on the biggest core, but that "Big Navi" nickname has always meant, in discussions between people or on tech sites, the TOP MODEL. Not one of the top models, but THE TOP MODEL. You don't start conversations about the second model of an unknown line of cards.

And let me ask you a question. Almost everyone is asking the same questions: Will Big Navi be faster than the 2080 Ti? And by how much? Can it be close to, equal to, or faster than the 3080?
Tell me: do those questions make any sense if they are about the second-from-the-top model?

You ignore logic and, in my case, make me look stupid, just so you can build an argument about something that you also don't know.

If you have any reason to believe "Big Navi" is a specific card/model, go ahead and explain it.

See what you are doing here? Again you put words in my mouth by distorting what I mean, because you should know, and probably do know, that saying "Big Navi" refers to the TOP model has NOTHING to do with how many models will be based on the biggest chip.
 
Maybe we'll see a CrossFired Pro Duo 6000 series competing with the green team in Port Royal.

... or AMD did some engineering voodoo to offer a 4K 60 FPS card cheaper than NVIDIA.
 
Huh, they are the ones who give out the NDA :D, not the ones under it.
Note the part highlighted in red...
Until a product is announced or launched, it tends to be confidential.

[screenshot of an NDA excerpt, confidentiality clause highlighted in red]
 
Note the part highlighted in red...
Until a product is announced or launched, it tends to be confidential.

Then why bother showing anything at all? These FPS figures are useless by themselves.

"Hey, let's just bench the lightest scenes in these games and get some useless FPS numbers to hype our product." - AMD CEO to PR guys

Sure, that worked out well for Polaris and Vega: just some bullshit numbers here and there until the actual release, and then boom, our GPUs are so bad we are selling them for almost nothing.
 
Gears 5 results at 4K from different sources (all using DX12):
Guru3D: 76 fps
Techspot: 72 fps
Eurogamer: 80 fps

Average of the three is 76 fps. AMD shows 73 fps.
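
If anyone wants to check the arithmetic, here is a trivial sketch; the site labels and fps values are just the ones quoted above (presumably RTX 3080 numbers from those reviews), nothing else is real data:

```python
# Cross-site Gears 5 4K results as quoted in this post (fps).
results = {"Guru3D": 76, "Techspot": 72, "Eurogamer": 80}

avg = sum(results.values()) / len(results)   # (76 + 72 + 80) / 3 = 76.0
amd_claim = 73                               # AMD's Gears 5 slide number
gap = (avg - amd_claim) / avg * 100          # how far below the 3080 average

print(f"3080 average: {avg:.1f} fps, AMD slide: {amd_claim} fps ({gap:.1f}% below)")
```

So AMD's number lands roughly 4% under the three-site 3080 average.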


AMD used its new Ryzen 5900 CPU; it helps a lot, and it would also help the RTX 3080 and even more the RTX 3090.

Well, let's see, but that's how the real world is...
What the heck are you talking about? In 4K, there is no difference between CPUs as there is a GPU bottleneck.
I would bet there is not much manipulation in the results. Why? Because they showed the CPU gaming results where there was one game in which Intel was faster.
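
To put the GPU-bottleneck point in concrete terms, here is a toy model (my own simplification, with made-up millisecond figures, not measurements): each frame needs both CPU and GPU work, and the slower of the two sets the frame rate.

```python
# Toy frame-pacing model: the slower of the CPU and GPU stages caps FPS.
# All millisecond figures below are invented for illustration.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when the slower stage gates each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# At 4K the GPU needs ~16 ms per frame, so a faster CPU changes nothing:
print(fps(cpu_ms=6.0, gpu_ms=16.0))  # 62.5 fps
print(fps(cpu_ms=5.0, gpu_ms=16.0))  # 62.5 fps -> GPU-bound either way

# At 1080p the GPU might need only ~5 ms, so the CPU becomes the limit:
print(fps(cpu_ms=6.0, gpu_ms=5.0))   # ~166.7 fps
print(fps(cpu_ms=5.0, gpu_ms=5.0))   # 200.0 fps -> the faster CPU shows up
```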


One thing people gloss over is that AMD has just announced "the fastest gaming CPUs". I bet these results were obtained running said CPUs.

Huge difference, isn't it? :D :D
[TPU relative performance chart at 3840×2160]
 
"AMD has probably cherry-picked games that are most optimized to its GPUs "
I don't think the engine is optimized for AMD; if anything, it's optimized for Nvidia...
It's neither.
You would need to have special API features to optimize for one of them.
Sponsored titles might have exclusive graphical features, but not vendor-specific performance optimizations.

It is rumored that Big Navi will at some point get a memory bandwidth upgrade, probably GDDR6X. Some reasons for launching with GDDR6 might be:
1) Limited availability of GDDR6X
2) Cost
3) No time or resources to develop the memory controller. GDDR6X uses PAM4, a four-state signaling format instead of binary coding, and it needs a different memory controller.
The time required to implement it is measured in years. Whichever memory controller they chose, the decision was made a long time ago.
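
For anyone wondering what "four-state coding" means in practice, here is a minimal sketch (my own illustration, nothing from AMD or Micron): PAM4 uses four signal levels so each symbol carries two bits, whereas binary NRZ carries one bit per symbol, which is why the controller and PHY logic have to be redesigned.

```python
# Minimal illustration of binary (NRZ) vs PAM4 signaling.
# Not real GDDR6/GDDR6X controller code; levels are abstract integers.

def nrz_encode(bits):
    """Binary/NRZ: two signal levels, one bit per symbol."""
    return list(bits)  # each symbol is 0 or 1

def pam4_encode(bits):
    """PAM4: four signal levels (0-3), two bits per symbol."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [(bits[i] << 1) | bits[i + 1] for i in range(0, len(bits), 2)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
print(nrz_encode(data))   # 8 symbols to move 8 bits
print(pam4_encode(data))  # 4 symbols to move 8 bits: [2, 3, 0, 2]
```

Half the symbols for the same data means double the data rate at the same signaling rate, but reliably telling four voltage levels apart is what makes the redesign nontrivial.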
 
I could live with that if the price is below the 3080's.
The 3080 is a virtual product, so don't hold your breath.

I love how the performance between the 2080 Ti and the 3080 is just "trades blows with the 3070".
 
It is weird that only TPU gets a +10 FPS boost at 4K in Borderlands, but Nvidia's own official slide on their webpage shows the 3080 FE card scoring exactly 61 fps in that game... hmmmm
Also, YouTuber "joker" tested his 3080 FE card and got 57.7 fps at 4K, which is in line with what Nvidia officially got. And now TPU gets 70 fps :D Suspicious.

I made a couple of modified slides from the available RTX 3080 benchmarks showing where the Navi 6000 series stacks up.
The card AMD showed yesterday is dead on, 1:1, with the RTX 3080 FE model!

Everybody is saying the RTX 3080 will also gain performance with a Ryzen 5000 series CPU... no, it won't, because at 4K you are still very much GPU-limited.
But if I'm in a giving mood, I'd say that in the best-case scenario the RTX 3080 may gain 1-2 fps at most with AMD's new best-of-the-best CPU, the 5950X.

TPU did a benchmark/performance test of Borderlands 3 when the game came out, pre-RTX 3080. They tested the game in DX12 with the cards available at the time, and the 2080 Ti was the top-tier card then.

When the 3080 launched, Borderlands 3 was benched with it, but only in DX11. It says so above the charts for Borderlands 3 in the 3080 review; just go there and read for yourself, or click the spoiler button below.
Borderlands 3 is brought to life with Unreal Engine 4, which takes advantage of DirectX 12 and DirectX 11 with some AMD-specific features, such as FidelityFX. In our testing, we used DirectX 11 because the DirectX 12 renderer has extremely long loading times and some instability. We will switch to the new renderer once these issues are ironed out.

AMD showed their "Big Navi" card running at 61 FPS in DX12 in Borderlands 3.
TPU has not run the 3080 in DX12 in Borderlands 3. They only ran DX11, and that is the benchmark that shows 70.3 FPS.

I don't know where your confusion comes from, other than that you probably overlooked the fact that TPU's RTX 3080 benchmark for Borderlands 3 is DX11-only and thought it was DX12.
 
I would imagine AMD wouldn't give out the numbers for their top-end card. They want to wow people in November, and while coming close to the 3080 is awesome, I have a feeling they were talking about a card that's one model down from the top one. That doesn't mean they couldn't one-up it again later, like nVidia does with Supers and Tis.
 
Gears 5 results at 4K from different sources (all using DX12):
Guru3D: 76 fps
Techspot: 72 fps
Eurogamer: 80 fps
Average of the three is 76 fps. AMD shows 73 fps.
What the heck are you talking about? In 4K, there is no difference between CPUs as there is a GPU bottleneck.
I would bet there is not much manipulation in the results. Why? Because they showed the CPU gaming results where there was one game in which Intel was faster.
Huge difference, isn't it? :D :D

You are just wasting your time; I could just stare at the ground in those games and get 200 FPS at 4K.

There are too many variables involved to compare results from one review site to another. Even results from the same review site can't be used if there are changes to the test system (new CPU, RAM, motherboard, drivers, etc.).

Those FPS figures are useless without any basis for comparison, which AMD is withholding.
 
Lisa clearly said Big Navi optimization was done with the Ryzen 5000 series. So, because Ryzen 5000 is faster, it would yield more performance in games for the 3080 too, and you cannot make a direct comparison like that.

Which one of the tested games is CPU-bottlenecked at 4K, pretty please, for the CPU to even matter?
Of course AMD tested with their own CPUs, as they now claim the gaming CPU top spot.
 