
AMD Releases Even More RX 6900 XT and RX 6800 XT Benchmarks Tested on Ryzen 9 5900X

I think the Zen 3 5600X is good value, as is the RX 6800. That said, I think the Zen 3 5800X could be the optimal CPU for gaming: a bit higher base/peak clock than the 5600X plus two additional cores, but a tougher pill to swallow on price if you're on a limited budget. I suppose these days I'd probably opt for that rather than stepping up to an RX 6800 XT; to be fair, personally I'm finding CPU core count and base frequency more and more appealing from an overall-system standpoint. I think I'd get more mileage in the long run, plus the RX 6800 is great value for RDNA2 judging from what I've seen thus far.
 
Which doesn't seem to need brute-force path tracing anyway:



Besides, I wouldn't be surprised if the Zen-derived "Infinity Cache" inside RDNA2 lets it spank Ampere even on that (rather useless for now and for the foreseeable future) front.
:rockout:
 
@btarunr

Thanks a lot for sharing the results.
Looks very promising.

Waiting for the TPU reviews, especially of the 6800 XT, to decide if that is my next GPU. :rolleyes:
I would be interested to see how the performance is with Zen 2 without SAM.

When NVIDIA launched their new RTX generation, I thought it would be really tough for AMD to match, but looking at the benchmarks so far, it is really impressive to see AMD catching up to NVIDIA, at least outside DXR. :)

Let's hope these will be available on release day in larger quantities and not sold out within minutes, giving one the option to buy after reading the reviews instead of watching them disappear from the online stores while one is still reading. :roll:
 
Games need more polygons and way better textures.
According to the GPU database on TechPowerUp, the RX 6900 XT has higher pixel and texel fillrates than the RTX 3090.
Pixel rate: 280 vs. 190 GPixel/s
Texture rate: 720 vs. 556 GTexel/s
LoL. AMD is more future-proof for the long term!
PS: The RX 6800 XT also beats the RTX 3090 if we rely only on a comparison of these numbers.
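For what it's worth, those database figures are pure paper math: pixel rate is ROPs × boost clock and texture rate is TMUs × boost clock. A minimal sketch of that arithmetic (the fillrate helper is just illustration; the clocks are launch specs, so the 6900 XT pixel figure lands near, not exactly on, the quoted 280):
```python
# Theoretical fillrates the way the TPU GPU database computes them:
# pixel rate = ROPs x boost clock, texture rate = TMUs x boost clock.
def fillrate(units: int, boost_ghz: float) -> float:
    """Peak rate in G-ops/s (GPixel/s for ROPs, GTexel/s for TMUs)."""
    return units * boost_ghz

# RTX 3090: 112 ROPs, 328 TMUs, 1.695 GHz boost.
print(fillrate(112, 1.695))  # ~190 GPixel/s
print(fillrate(328, 1.695))  # ~556 GTexel/s

# RX 6900 XT: 128 ROPs, 320 TMUs, 2.25 GHz boost (launch spec).
print(fillrate(128, 2.25))   # 288 GPixel/s (the quoted 280 implies a slightly lower pre-launch clock)
print(fillrate(320, 2.25))   # 720 GTexel/s
```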
 
According to the GPU database on TechPowerUp, the RX 6900 XT has higher pixel and texel fillrates than the RTX 3090.
Pixel rate: 280 vs. 190 GPixel/s
Texture rate: 720 vs. 556 GTexel/s
LoL. AMD is more future-proof for the long term!
PS: The RX 6800 XT also beats the RTX 3090 if we rely only on a comparison of these numbers.
I know what you are trying to say here, but these cards are different; they should not be compared 1:1 given the hardware.
 
I know what you are trying to say here, but these cards are different; they should not be compared 1:1 given the hardware.
That only shows how they compare at launch; it says nothing about how long the cards will stay relevant. I think that even if AMD's cards don't show a big advantage in the first reviews, in the future they will perform even better compared to the competing models from NVIDIA's 30-series.
 
That only shows how they compare at launch; it says nothing about how long the cards will stay relevant. I think that even if AMD's cards don't show a big advantage in the first reviews, in the future they will perform even better compared to the competing models from NVIDIA's 30-series.
???

Fine wine? A couple percent uptick overall, maybe more in a title or two? I wouldn't hold my breath for that. And those numbers you quoted don't add up to your conclusion.
 
???

Fine wine? A couple percent uptick overall, maybe more in a title or two? I wouldn't hold my breath for that. And those numbers you quoted don't add up to your conclusion.
It will all become clear in time. At the moment we can only guess, based on the specifications we know now, how things will develop. It is not possible to present facts that have not yet happened.
 
It will all become clear in time. At the moment we can only guess, based on the specifications we know now, how things will develop. It is not possible to present facts that have not yet happened.
I'm glad you understand that concept... apply it. :p
 
According to the GPU database on TechPowerUp, the RX 6900 XT has higher pixel and texel fillrates than the RTX 3090.
Pixel rate: 280 vs. 190 GPixel/s
Texture rate: 720 vs. 556 GTexel/s
LoL. AMD is more future-proof for the long term!
PS: The RX 6800 XT also beats the RTX 3090 if we rely only on a comparison of these numbers.
As you might have figured out already, those numbers tell you absolutely nothing about the actual performance of a card. Same with TFLOPS; they are just for reference. Raw fillrates, compute throughput and VRAM bandwidth cannot be directly compared between GPUs of different architectures, not even between GPUs of the same brand.
And you can't predict the future performance gains or losses of a GPU against another product either, as there are far too many factors involved.
 
I'm glad you understand that concept... apply it. :p
Hmm, the next factor: NVIDIA's shortcomings with VRAM size (this partially excludes only the RTX 3090 and applies to all other models: 3080 10GB, 3070 8GB, 3060 Ti (?)):


First:
AMD will support all ray tracing titles using industry-based standards, including the Microsoft DXR API and the upcoming Vulkan raytracing API. Games making use of proprietary raytracing APIs and extensions will not be supported.
— AMD Marketing
.....
AMD has made a commitment to stick to industry standards, such as the Microsoft DXR and Vulkan ray tracing APIs. Both should slowly become more popular as the focus moves away from NVIDIA's implementation. After all, Intel will support DirectX DXR as well, so developers will have even less reason to focus on NVIDIA's implementation.
Second:

Interestingly, Keith Lee revealed that in order to support 4K x 4K UltraHD textures, 12GB of VRAM is required. This means that the Radeon RX 6000 series, which all feature 16GB GDDR6 memory along with 128MB Infinity Cache, should have no issues delivering such high-resolution textures. It may also mean that the NVIDIA GeForce RTX 3080 graphics card, which only has 10GB of VRAM, will not be enough.
Links are below "First & Second"!
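To put the 12GB figure in perspective, here is a rough sketch of standard texture-memory arithmetic (the helper and the texture-count framing are illustrative, not numbers from the linked article):
```python
# Back-of-the-envelope VRAM cost of high-resolution textures.
# A 4K x 4K RGBA8 texture is 4096 * 4096 * 4 bytes; a full mip chain
# adds roughly 1/3; block compression (e.g. BC7, 1 byte/texel) cuts it 4x.
MIB = 1024 * 1024

def texture_mib(side: int, bytes_per_texel: float, with_mips: bool = True) -> float:
    base = side * side * bytes_per_texel
    return base * (4 / 3 if with_mips else 1) / MIB

print(texture_mib(4096, 4))  # ~85 MiB uncompressed RGBA8 with mips
print(texture_mib(4096, 1))  # ~21 MiB BC7-compressed with mips

# Even compressed, a few hundred unique 4K textures resident at once,
# plus render targets and geometry, is how a 10GB budget gets tight at 4K.
```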
 
NV uses DXR, same as AMD.....

10GB may fall short at 4K in a few years... but by then, you'll want another GPU anyway. Even DOOM on nightmare doesn't eclipse 10GB @ 4K.

As you might have figured out already, those numbers tell you absolutely nothing about the actual performance of a card. Same with TFLOPS; they are just for reference. Raw fillrates, compute throughput and VRAM bandwidth cannot be directly compared between GPUs of different architectures, not even between GPUs of the same brand.
And you can't predict the future performance gains or losses of a GPU against another product either, as there are far too many factors involved.
I'm giving up. ;)
 
The RX 5700 XT didn't really overclock great either (2,000+ MHz only yielded at most 10 extra FPS with most models), but we'll see how the 6800 XT works out.
The 5700 XT OC'd pretty well (went up in frequency), but the gains were small because it was already memory-bandwidth starved. Here, the memory architecture has been completely overhauled, and at least the Infinity Cache should go up in speed with the core while overclocking, so it should be quite interesting to see...
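The starvation point is easy to see on paper, since memory bandwidth is just data rate × bus width and a core overclock doesn't move it. A quick sketch using the published specs (the helper name is mine):
```python
# Theoretical VRAM bandwidth: data rate per pin (Gbps) x bus width / 8.
def vram_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(vram_bandwidth_gbs(14, 256))  # RX 5700 XT: 448 GB/s
print(vram_bandwidth_gbs(16, 256))  # RX 6800 XT: 512 GB/s

# A core overclock raises shader and fillrate throughput but leaves these
# numbers fixed, which is why the 5700 XT gained so little. The post above
# expects RDNA2's on-die 128MB Infinity Cache to scale up with the core,
# easing that bottleneck.
```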
 
That's an assumption on your part and not a very logical one, especially considering that NVidia has already had 2 years to gain a lead in both deployment and development of RTRT.
It's a very logical assumption, given who commands the console market (and the situation with the upcoming GPUs, too).

The more likely scenario, though, is that in that form (brute-force path tracing) it will never take off.

is catching up

Smaller chips, lower power consumption, slower (and cheaper) VRAM and more of it, at a lower price and with better perf/$ than the competition.
Catching up, eh? :)
 
One of the reasons for AMD's fine wine is simply that AMD took more time to polish their drivers, because they have far fewer resources than NVIDIA to do so.

Another is that GCN's balance between fillrate/texture rate and compute leaned a bit more to the compute side, while NVIDIA, on the other hand, focused a bit more on the fillrate side.

Each generation of games shifted the load from fillrate toward compute, putting AMD GPUs in a better position, but not really by enough to make a card last much longer. The thing is, low-end cards were outclassed anyway, while high-end cards were bought by people with money who would probably replace them as soon as it made sense.

It looks like AMD went for a more balanced setup with Navi, while NVIDIA is heading down the heavy-compute path. We will see in the future which balance is better, but right now it's too early to tell.

So in the end, it does not really matter. A good strategy is to buy a PC at a price where you can afford another one at the same price in 3-4 years, and you will always be in good shape. If paying $500 for a card every 3-4 years is too much, buy something cheaper and that's it.

There is a good chance that in 4 years that $500 card will be beaten by a $250 card anyway, even more so as GPUs move to chiplet designs, which should drive a good increase in performance.
 
Oh, do help us all understand your point in more detail...
Stranger talking about themselves in the plural, are you seriously asking why anyone would optimize games for the lion's share of the market?
 
Stranger talking about themselves in the plural, are you seriously asking why anyone would optimize games for the lion's share of the market?
Then why aren't you? Hmm? Perhaps because you know both that there is a counter-argument and that such an argument is perfectly valid. It's as valid now as it has been since the console vs. PC debate began.
 
It's a very logical assumption, given who commands the console market (and the situation with the upcoming GPUs, too).

Considering they had the consoles last generation as well, how did that whole optimizing-for-AMD-architecture thing go?
 
The explanation is extremely simple. In the past, AMD was not ready to take advantage of the fact that the old consoles' hardware used components they developed. However, now they can, and they do!
It is the opposite, imo. After they wrote the Radeon profiler, they found out about the intrinsic limits of the hardware.
Yes, the scheduler was flexible, as it was announced to be at launch, but instruction reordering does not deliver the full extent of its performance. IPC was still 0.25, and now that it is 1, that is a lot in comparison. They have all these baked-in instructions doing the intrinsic tuning for them in the hardware. The ISA moved away from where GCN was by a great deal. Plus, they have mesh shaders, which push the triangle-pixel-size vs. wavefront-thread-cost tradeoff into hardware; performance really suffered with triangles under 64 pixels in area. Not so any more.
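The 0.25 vs. 1 figure falls straight out of the SIMD layouts described in AMD's whitepapers. A minimal sketch of that arithmetic:
```python
# Per-SIMD issue rate, per AMD's GCN and RDNA whitepapers: GCN executes a
# 64-thread wavefront on a 16-lane SIMD over 4 cycles; RDNA executes a
# 32-thread wave on a 32-lane SIMD in a single cycle.
def simd_ipc(wave_size: int, simd_lanes: int) -> float:
    cycles_per_instruction = wave_size / simd_lanes
    return 1 / cycles_per_instruction

print(simd_ipc(64, 16))  # GCN:  0.25 instructions per clock
print(simd_ipc(32, 32))  # RDNA: 1.0 instruction per clock

# Peak ALU lanes per CU are the same (4 x SIMD16 vs 2 x SIMD32), but the
# 4x shorter issue latency cuts stalls on short or dependent shader work.
```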
 
Considering they had the consoles last generation as well, how did that whole optimizing-for-AMD-architecture thing go?

Oh, that is easy, my friend.
Epic on UE4: "it was optimized for NVIDIA GPUs."
Epic today demoes UE5 on an RDNA2 chip running in the weaker of the two next-gen consoles, and spits on Huang's RT altogether, even though it is supported even in UE4.



There is more fun to come.

A recent demo of the XSX vs. the 3080 was commended by a greenboi as "merely 2080 Ti levels."
That puts the next-gen consoles ahead of >98-99% of the PC GPU market.

Then why aren't you?
It was a rhetorical question.
 
@medi01, no idea what you just said.
 