
AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks

Going by that logic, why not test these at 720p or 480p? What he said makes sense: those resolutions are obsolete, same as 1080p.
I doubt anyone spending that amount of money, a 7950X3D with a 4090, will be using 1080p.
These tests are downright useless and have zero REAL value. But then again, if you were shown real-case tests, the tests for the 99% of people and not the 0.0001% weirdo who will run this setup, you wouldn't even care to upgrade, because in reality the difference is minimal. That's also true for new-generation CPUs vs. previous ones.

Any good review will also test that, because obviously it matters for the users. But to get good granularity for comparing the capabilities of different CPUs, you need the CPU to be the important part; screw real use cases. Cinebench is also not a real use case, nor are most of the benchmarks out there.

It's called the scientific method and means they're testing right and everyone doubting them is wrong.

Unfortunate phrasing ;)

Some third party tests, with a 3090. Still at 1080p though.

[Attachment 285448: third-party benchmark chart at 1080p]


I really want to see the comparison with the 5800X3D. From the AMD numbers it seems like a wash: better in some, equal in others. I'm waiting for head-to-head tests.

When is the embargo on this over?
 
I think you don't understand what apples to apples is, then. Memory speed affects the IMC speed. When you put 6000 memory on Zen 4, you are running the IMC not just overclocked, but at the actual upper limit it can run. When you put 6000 memory on Intel, you are in fact UNDERCLOCKING the IMC compared to stock, and you are way, way below the upper limit of the IMC speed. You either run both at officially supported speeds, which is 5200 and 5600 respectively, or you run both maxed out, which is 6000/6400 for Zen 4 and 7600-8000+ for Intel.
> claims I don't understand what apples to apples mean
> goes on to suggest running memory at different speeds because MUH IMC

People like you are why the human race is doomed.
 
Any good review will also test that, because obviously it matters for the users. But to get good granularity for comparing the capabilities of different CPUs, you need the CPU to be the important part; screw real use cases. Cinebench is also not a real use case, nor are most of the benchmarks out there.



Unfortunate phrasing ;)



I really want to see the comparison with the 5800X3D. From the AMD numbers it seems like a wash: better in some, equal in others. I'm waiting for head-to-head tests.

When is the embargo on this over?
Well, at least we know they're getting tested thoroughly on TPU; third-party reviews are what matter.

Hopefully removing differences, not adding some, obviously.
 
When is the embargo on this over?
Tuesday.

 
Your logic is flawed.
First of all, when testing CPU performance, you want to remove all other bottlenecks if possible. You want to see the maximum framerate a CPU can produce.
Second of all, there are no 720p or 480p monitors out there. But there are plenty of 1080p displays, with refresh rates including 480 Hz and higher.

You should watch the Hardware Unboxed video on this topic. Most people can't comprehend why CPUs are tested this way.

If you test a CPU with a GPU bottleneck, you have no idea what will happen when you upgrade your GPU. Your framerate might stay exactly the same, because your CPU is maxed out.
But when you test it at 1080p, you will know exactly how much headroom you have.

This is exactly why low-end GPUs like 3050 and 3060, or even RX 6400, are tested with the fastest CPU on the market. You don't want a CPU bottleneck to affect your results.
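Rough sketch of that logic, with made-up numbers (nothing measured): the framerate you see is just the lower of what the CPU can feed and what the GPU can render, so only a CPU-limited run exposes the CPU's own ceiling.

```python
# Toy bottleneck model: the FPS you observe is capped by whichever part runs out first.
# All numbers below are invented for illustration, not benchmark results.

def observed_fps(cpu_cap: float, gpu_cap: float) -> float:
    """The slower of the two limits determines the framerate you actually see."""
    return min(cpu_cap, gpu_cap)

cpu_cap = 144  # hypothetical: FPS this CPU can feed the GPU in a given game
gpu_caps = {"1080p": 300, "1440p": 180, "4K": 90}  # hypothetical GPU limits per resolution

for res, gpu_cap in gpu_caps.items():
    fps = observed_fps(cpu_cap, gpu_cap)
    limiter = "CPU" if cpu_cap < gpu_cap else "GPU"
    print(f"{res}: {fps} FPS ({limiter}-limited)")
# Only the 1080p run is CPU-limited and therefore reveals the CPU's own ceiling (144);
# at 4K, any reasonably fast CPU would post the same ~90 FPS.
```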
If you read what I said, I completely understand that, but what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p. Most people with this budget will opt for high-end 4K or 1440p.

Removing all bottlenecks and testing a CPU at 1080p, when the most likely scenario is to use it for 1440p and 4K, makes the same sense as testing it at 720p, 480p or 6p, as that other guy said.
It literally holds zero meaning.

Also, to continue on why removing all bottlenecks to test one part is meaningless, consider synthetic benchmarks; they do just that. Why don't you buy your CPU based on a synthetic benchmark? Or your GPU? Because it literally doesn't matter; what you care about is your use case.

Sure, you CAN use these parts at 1080p, but what makes more sense is that the 1080p benchmark is auxiliary to the main benchmarks run on probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet on those configs the gains in performance will be negligible).
 
> claims I don't understand what apples to apples mean
> goes on to suggest running memory at different speeds because MUH IMC

People like you are why the human race is doomed.
I'm sure people who are factually correct are the reason the human race is doomed, while people like you, bathing in falsehoods, are the road to salvation. Absolutely on point :D

Sure, underclocking one while overclocking the other one is apples to apples
 
Seems like the 3D cache was much more beneficial with DDR4. DDR5 almost doubles the bandwidth with similar latency, so the gains are much smaller.
But even more so than that, Zen 4 is quite a bit faster than Zen 3, and this is primarily achieved through front-end improvements, which ultimately means it will be less cache-sensitive. So we should expect it to get smaller gains from extra L3 cache (relatively speaking).

There are still cases where the additional cache helps tremendously. F1 2021 and Watch Dogs Legion see enormous gains.
Yes, those are edge cases. Cherry-picking edge cases to prove a point isn't a particularly good argument, especially if you want to extrapolate this into general performance. The F1 game series has been known to produce outliers for years, and I find it interesting that they don't even use the latest game in the series.

Keep in mind that CPUs aren't like GPUs; they are latency engines, i.e. designed to reduce the latency of a task. For them, latency trumps bandwidth, and L3 cache's latency advantage is even greater for Zen 4 because of Zen 4's higher clocks.
Firstly, L3 doesn't work the way most people think;
L3 (in current AMD and Intel architectures) is a spillover cache for L2; you should not think of it as a faster piece of RAM or a slightly slower L2. L3 will only be beneficial when you get cache hits there, and unlike L2, you don't get cache hits there from prefetched blocks etc., as L3 only contains recently discarded blocks from L2. L3 is an LRU-type cache, which means every cache line fetched into L2 will push another out of L3.
You get a hit in L3 when (ordered by likelihood):
- An instruction cache line has been discarded from this core (or another core).
- A data cache line has been discarded from this core, most likely due to branch misprediction.
- A data cache line has been discarded from another core, but this is exceedingly rare compared to the other cases, as data stays in L3 for a very short time, and the chances of multiple threads accessing the same data cache line within a few thousand clock cycles is minuscule.

This is the reason why we see only a handful of applications being sensitive to L3, as it mostly has to do with instruction cache. For those who know low-level optimization, the reason should be immediately clear: highly optimized code is commonly known to be less sensitive to instruction cache, which essentially means better code is less sensitive to L3. Don't get me wrong, extra cache is good. But don't assume software should be designed to "scale with L3 cache", when that's a symptom of bad code.
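If it helps, here's a deliberately tiny toy model of that spillover/LRU behaviour. Real L2/L3 are set-associative and far more complicated, so treat this as a cartoon of the eviction flow, not AMD's or Intel's actual design:

```python
from collections import OrderedDict

# Cartoon of a spillover (victim) L3: lines enter L3 only when evicted from L2,
# and an LRU policy drops the oldest victim when L3 is full. Sizes are tiny on purpose.
class SpilloverCache:
    def __init__(self, l2_lines=4, l3_lines=8):
        self.l2 = OrderedDict()   # most recently used at the end
        self.l3 = OrderedDict()
        self.l2_lines, self.l3_lines = l2_lines, l3_lines

    def access(self, line):
        if line in self.l2:                       # L2 hit
            self.l2.move_to_end(line)
            return "L2 hit"
        if line in self.l3:                       # L3 hit: promote back into L2
            self.l3.pop(line)
            result = "L3 hit"
        else:
            result = "miss (fetch from memory)"
        self.l2[line] = True                      # fill into L2...
        if len(self.l2) > self.l2_lines:          # ...which may spill a victim into L3
            victim, _ = self.l2.popitem(last=False)
            self.l3[victim] = True
            if len(self.l3) > self.l3_lines:
                self.l3.popitem(last=False)       # LRU victim falls out of L3 entirely
        return result

cache = SpilloverCache()
for addr in [1, 2, 3, 4, 5, 1]:   # line 1 gets pushed out of L2 by line 5, then hits in L3
    print(addr, cache.access(addr))
```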

Secondly, regarding latency vs. bandwidth;
Latency is always better when you look at a single instruction or a single block of data, but when looking at real-world performance you have to look at the overall latency and throughput. If, for instance, a thread is stalled waiting for two or more cache lines to be fetched, then slightly higher latency doesn't matter as much as bandwidth. This essentially comes down to the balance between data and how often the pipeline stalls. More bandwidth also means the prefetcher can fetch more data in time, so it might prevent some stalls altogether. This is why CPUs overall are much faster than 20 years ago, even though latencies in general have gradually increased.
But this doesn't really apply to L3, though, as the L3 cache works very differently, as described above.

Lastly, when compared to a small generational uplift, like Zen 2 -> Zen 3 or Zen 3 -> Zen 4, the gains from extra L3 are pretty small, and the large gains are mostly down to very specific applications. This is why I keep calling it mostly a gimmick. If you, on the other hand, use one of those applications where you get a 30-40% boost, then by all means go ahead and buy one, but for everyone else, it's mostly something to brag about.

The Intel 13900K's officially supported memory is DDR5-5600: https://www.intel.com/content/www/u...-36m-cache-up-to-5-80-ghz/specifications.html. Everything above that is based on your luck in the silicon lottery.
Not to mention that you are likely to downgrade that speed over time (or risk system stability).
 
Sure, you CAN use these parts at 1080p, but what makes more sense is that the 1080p benchmark is auxiliary to the main benchmarks run on probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet on those configs the gains in performance will be negligible).

All you did is confirm you do not understand the process.

Example 1:

1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?

Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, the CPU bottleneck happens at 100 FPS. Which means you can get 67% more performance in 1440p after upgrading the GPU, and you can get 150% more performance in 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.

What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.

If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
Seriously, please just go watch the Hardware Unboxed video.
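Spelling out the arithmetic behind those two examples (same made-up numbers as above): treat the 1080p result as the CPU's ceiling, and the most a GPU upgrade can ever give you at a resolution is the gap up to that ceiling.

```python
# Reproduce the headroom arithmetic from Examples 1 and 2 (illustrative numbers only).
def max_gpu_upgrade_gain(cpu_ceiling_fps: float, current_fps: float) -> float:
    """Largest possible uplift (in %) from a faster GPU, given the CPU's 1080p ceiling."""
    return (cpu_ceiling_fps / current_fps - 1) * 100

examples = {
    "Example 1 (1080p ceiling 60)":  {"ceiling": 60,  "1440p": 60, "4K": 40},
    "Example 2 (1080p ceiling 100)": {"ceiling": 100, "1440p": 60, "4K": 40},
}
for name, d in examples.items():
    for res in ("1440p", "4K"):
        gain = max_gpu_upgrade_gain(d["ceiling"], d[res])
        print(f"{name}, {res}: at most ~{gain:.0f}% more FPS from any GPU upgrade")
# Example 1: 0% at 1440p, 50% at 4K. Example 2: ~67% at 1440p, 150% at 4K.
```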
 
If you read what I said, I completely understand that, but what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p. Most people with this budget will opt for high-end 4K or 1440p.

Removing all bottlenecks and testing a CPU at 1080p, when the most likely scenario is to use it for 1440p and 4K, makes the same sense as testing it at 720p, 480p or 6p, as that other guy said.
It literally holds zero meaning.

Also, to continue on why removing all bottlenecks to test one part is meaningless, consider synthetic benchmarks; they do just that. Why don't you buy your CPU based on a synthetic benchmark? Or your GPU? Because it literally doesn't matter; what you care about is your use case.

Sure, you CAN use these parts at 1080p, but what makes more sense is that the 1080p benchmark is auxiliary to the main benchmarks run on probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet on those configs the gains in performance will be negligible).
I think the bit you're not getting is that reviews don't benchmark to give customers an idea of how it will typically run on their personal 144 Hz 1440p or 4K, 4090-equipped hardware, even in a 4090 review.
It's to see how each GPU, in this case the 4090, compares to a few others, now and in the future.
A subtle difference.
 
All you did is confirm you do not understand the process.

Example 1:

1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?

Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, the CPU bottleneck happens at 100 FPS. Which means you can get 67% more performance in 1440p after upgrading the GPU, and you can get 150% more performance in 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.

What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.

If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
Seriously, please just go watch the Hardware Unboxed video.
Mate, I have been watching those guys for years, and more advanced channels/forums as well. You still fail to understand my reasoning, but I won't continue this in the thread; feel free to message me if you need more explanation than already given.
 
There's nothing for you to explain. You need stuff explained to you, but you're unwilling to listen.

4K testing is for the "here and now"; it only shows you how current hardware behaves.
1080p testing is for both now and the future; it tells you how the CPU will behave in two years or more, when more powerful GPUs are available.
 
If you read what I said, I completely understand that, but what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p.
CPUs should be tested with CPU-bound settings. The resolution is irrelevant. If you are CPU-bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
 
CPUs should be tested with CPU-bound settings. The resolution is irrelevant. If you are CPU-bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.

It gives you the information that it doesn't matter what CPU you have for those settings. Which is the point.

It just depends what your use case is. No one should be thinking results at 720p and 1080p are going to translate to massive gains at resolutions they're actually going to use in the real world.

So no need to drop $1k on a CPU when you can get one for $250 and not notice any difference... Save the $750 and spend it on something that will positively impact your use case instead.
 
It gives you the information that it doesn't matter what CPU you have for those settings. Which is the point.

It just depends what your use case is. No one should be thinking results at 720p and 1080p are going to translate to massive gains at resolutions they're actually going to use in the real world.

So no need to drop $1k on a CPU when you can get one for $250 and not notice any difference... Save the $750 and spend it on something that will positively impact your use case instead.
And 1080p results tell you that as well, so why do you need to test at 4K? If I see a CPU getting 100 FPS at 1080p, then obviously it can get 100 FPS at 4K, if the card allows, so what exactly did a 4K CPU test offer me?
 
Wow, what a waste of effort. I expect virtually nothing for productivity software. This seems to be far weaker than the 5800X3D uplifts, despite AMD's hype. They also overhyped and lied about 7900XT(X) performance.

I couldn't care less about gaming performance with CPUs like Zen 4 and Raptor Lake; they are more than strong enough. For productivity I still think the 13700K is the sweet spot, but I will wait and see if the RL refresh is more than a tweak to clock speeds.
Remember the 7000 series already doubled the L2 cache over the 5000 series. Likely why additional L3 doesn’t make as much of an improvement.
 
CPUs should be tested with CPU-bound settings. The resolution is irrelevant. If you are CPU-bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
That's sarcasm, right?
CPUs, GPUs, and other hardware should be tested at relevant settings and workloads; anything else is utterly pointless for deciding which one to purchase.

If you want to induce artificial workloads to find theoretical limits then that's fine for a technical discussion, but this should not be confused with what is a better product. How a product behaves under circumstances you don't run into is not going to affect your user experience. Far too many fools have purchased products based on specs or artificial benchmarks.
 
It gives you the information that it doesn't matter what CPU you have for those settings. Which is the point.

It just depends what your use case is. No one should be thinking results at 720p and 1080p are going to translate to massive gains at resolutions they're actually going to use in the real world.

So no need to drop $1k on a CPU when you can get one for $250 and not notice any difference... Save the $750 and spend it on something that will positively impact your use case instead.
Don't you think that a person who buys a GPU over $1000 will just want to find out which is the best CPU for gaming, regardless of the price? Also, having a very strong (best) CPU ensures that you don't have to change it for a long time and can just upgrade the GPU. Plus, it is likely that the successor to the 4090 will be so powerful that it starts to be limited by weak CPUs even at 4K.

Anyway, I agree with the idea that the ideal is to test at 1080p, 1440p and 4K.
 
That's sarcasm, right?
CPUs, GPUs, and other hardware should be tested at relevant settings and workloads; anything else is utterly pointless for deciding which one to purchase.

If you want to induce artificial workloads to find theoretical limits then that's fine for a technical discussion, but this should not be confused with what is a better product. How a product behaves under circumstances you don't run into is not going to affect your user experience. Far too many fools have purchased products based on specs or artificial benchmarks.
Are we at the point where it's considered sarcasm to test CPUs at... CPU-bound settings? Just wow :roll:

It's so freaking easy to demonstrate the fallacy in your logic, which begs the question of how you could not notice it yourself. CPU A and CPU B both cost 300€.

CPU A gets 100 FPS at 4K and 130 FPS at 1080p.
CPU B gets 100 FPS at 4K and 200 FPS at 1080p.

If you want a CPU just to play games, why the heck would you choose CPU A, since obviously CPU B will last you longer? And how the heck would you have known that unless you tested in CPU-bound settings? I mean, do I really have to explain things like it's kindergarten?
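Playing that example forward with an imaginary much faster future GPU (all numbers invented, same as above): both CPUs look identical at 4K today, and only the CPU-bound result predicts which one pulls ahead later.

```python
# CPU A/B example from above, projected onto a hypothetical much faster future GPU.
# All figures are invented for illustration.
cpu_ceiling = {"CPU A": 130, "CPU B": 200}   # CPU-bound (1080p) results
gpu_cap_today_4k = 100                       # today's GPU caps both CPUs at 100 FPS at 4K
gpu_cap_future_4k = 250                      # imagined next-gen GPU, no longer the limit

for cpu, ceiling in cpu_ceiling.items():
    today = min(ceiling, gpu_cap_today_4k)
    later = min(ceiling, gpu_cap_future_4k)
    print(f"{cpu}: {today} FPS at 4K today -> {later} FPS after the GPU upgrade")
# Both look identical at 4K today (100 vs 100); only the CPU-bound test predicted
# that CPU B would pull ahead (200 vs 130) once the GPU stops being the bottleneck.
```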
 
One of the main problems with CPU testing is not the resolution they test at, but the kind of games reviewers use. It's almost always FPS/third-person action games, and there are very few, if any, games that are actually CPU-bound most of the time.

Just build a very large base in Valheim and you will be CPU-limited at 4K. Build a large town in a lot of colony/city-builder games and you will be CPU-bound at 4K. Even today, in MMORPGs, in areas where there are a lot of people, or in raids, you can be CPU-bound with a modern CPU. In some cases it's laziness or lack of time to produce a proper save (like building a large base in Valheim or a large city or factory in other types of games); in others, it's just very hard to test (like MMORPGs).

But most of the time, the data extracted from non-GPU-limited scenarios can be extrapolated to a degree to those kinds of games, so it's still worth it if it's the easiest thing to do.
 
Mate, I have been watching those guys for years, and more advanced channels/forums as well. You still fail to understand my reasoning, but I won't continue this in the thread; feel free to message me if you need more explanation than already given.
Try to understand it harder, then.

Actually, they should be testing at 480p to find out the maximum framerate the CPU can give. They don't do that because they would get even more 'it's not realistic' comments. The scientific method is hard to grasp sometimes.
 
Are we at the point where it's considered sarcasm to test CPUs at... CPU-bound settings? Just wow :roll:
Any sense of rationality is completely gone from your comment, just look at the previous one;

Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
This is not only nonsensical, it's actually blatantly false. Benchmarking real workloads at realistic settings absolutely gives you a lot of information, as it tells the reader how the contending products will perform in real life.

It's so freaking easy to demonstrate the fallacy in your logic, which begs the question of how you could not notice it yourself. CPU A and CPU B both cost 300€.

CPU A gets 100 FPS at 4K and 130 FPS at 1080p.
CPU B gets 100 FPS at 4K and 200 FPS at 1080p.

If you want a CPU just to play games, why the heck would you choose CPU A, since obviously CPU B will last you longer? And how the heck would you have known that unless you tested in CPU-bound settings? I mean, do I really have to explain things like it's kindergarten?
Your example is nonsense, as you wouldn't see that kind of performance discrepancy unless you are comparing a low-end CPU to a high-end CPU when benchmarking a large sample of games. And trying to use a cherry-picked game or two to showcase a "bottleneck" is ridiculous, as there is no reason to assume your selection has the characteristics of future games coming down the line.

The best approach is a large selection of games at realistic settings; look at the overall trend, eliminating the outliers, and that will give you a better prediction of what is the better long-term investment.
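For what it's worth, one way to put that "overall trend, minus the outliers" idea into numbers (my own sketch, not any reviewer's actual method) is a trimmed geometric mean of per-game ratios:

```python
from math import prod

# Sketch of a trimmed geometric mean over per-game relative results (CPU B / CPU A).
# The ratios are made up; the point is only to show outliers being dropped before averaging.
def trimmed_geomean(ratios, trim=1):
    """Drop the `trim` lowest and highest ratios, then take the geometric mean of the rest."""
    kept = sorted(ratios)[trim:len(ratios) - trim]
    return prod(kept) ** (1 / len(kept))

per_game_ratio = [1.02, 1.05, 0.99, 1.03, 1.41, 1.01, 1.04]  # 1.41 is an F1-style outlier
print(f"Plain geomean:   {prod(per_game_ratio) ** (1 / len(per_game_ratio)):.3f}")
print(f"Trimmed geomean: {trimmed_geomean(per_game_ratio):.3f}")
```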
 
Any sense of rationality is completely gone from your comment, just look at the previous one;


This is not only nonsensical, it's actually blatantly false. Benchmarking real workloads at realistic settings absolutely gives you a lot of information, as it tells the reader how the contending products will perform in real life.
Any sense of rationality is completely gone from YOUR comment. A GPU-bound test gives you ABSOLUTELY no information about how the CPU performs. If you wanted to see how the GPU performs, then go watch the GPU review; why the heck are you watching the CPU one?

I'm sorry, but there's no point arguing anymore; you are just wrong. If I had followed your advice, only looking at 4K results, I would have bought a 7100 or a G4560 instead of an 8700K, since they performed exactly the same at 4K with the top GPU of the time. And then I upgraded to a 3090. If the graph below doesn't make you realize how wrong you are, nothing will, so you are just going to get ignored.

[Attachment: relative performance chart at 3840x2160]
 
I'm not impressed with these results from a processor that costs $120+ more than the current top dog at its current retail price. Realistically, performance may actually be less, since AMD likely cherry-picked some of those results. Regardless, I think the real performance gains will matter on the lower SKUs, as they will be cheaper and more directly competing on price. I'm looking forward to the 7800X3D in April.

Cache is expensive; more expensive than just adding more cores to the party.

It's designed for a certain type of workload; games seem to benefit the most from additional L3 cache.

Other than that, in regular apps it won't be any better than a flagship, most likely due to the flagship's higher sustained clocks.
 
I'm getting excited for W1zzard's review of the 7900X3D in a few days...

 
Why only 1080p?
 