
AMD Ryzen 7 5800X

Why not a 3080 in the test setup?
 
Why not a 3080 in the test setup?
Because all other CPUs were tested with a 2080 Ti.
I believe Zen 3 will be more powerful when all the CPUs are tested with a 3080/3090 and faster RAM.
 
Because all other CPUs were tested with a 2080 Ti.
I believe Zen 3 will be more powerful when all the CPUs are tested with a 3080/3090 and faster RAM.
Hardware Unboxed tested with a 3090, so that could be an indication that the 5000 series will perform better on newer graphics cards.
Maybe. Perhaps.
 
Excellent review Wizz (as usual); this is a very compelling CPU for existing AM4 users. My question: there are some new X570 boards released around, or a little before, the 5000 series launch, like the Asus X570 Prime Pro and the new ASRock X570 Velocita. Do these boards have the full 16 lanes from the chipset available? They seem to run PCIe 4.0 x8 across two x16 slots and have several NVMe x4 slots as well. The ASRock board even has 8 SATA ports. I want an X570 board that uses all 40 lanes without wasting them on a U.2 or some other dead connector. My TR4 build is great for coding and video production, but my X570 build walks all over it in gaming. I want a board that gives me the gaming performance of X570 with PCIe bandwidth close to TR4.
 
Erm... Having the same FPS on average as a CPU that costs €300 less doesn't mean winning. The way Lisa was bragging a few weeks ago, I was expecting complete domination. This ain't it.
The issue with that TechSpot benchmark is that it's only showing the top CPU, but the few other people who bothered to bench the 5600X at launch show that it can beat the i9. TPU seems to be the only English-language website that didn't jump on the 5950X and the 5900X and let the 5800X and 5600X rot for a while.
1604592914533.png


Even GN doesn't have time for the smaller CPUs.
1604593095653.png
 
So it's not the gaming king? :rolleyes:
This was tested with a 2080 Ti; a relevant question is whether an RTX 3090 or an RDNA2 GPU starts to shift performance a little in the other direction. Pairing a stronger GPU with a multi-core CPU that has more combined cache from additional physical cores is a fairly relevant consideration. Game engine and OS-level improvements could also lift overall multi-core performance; there are obviously limitations in some areas, but there are gains in others as well. In general I would agree with what's said and implied. There are interesting aspects to the 5800X's performance in different regions and lines of thinking. The Ryzen 3900X and 3900XT are $10 to $20 more while offering four additional cores. In some scenarios they win relative to the Ryzen 5800X; in other areas they lose. I'd actually argue some of those areas have implications for where gaming is headed overall.

Take Blender, for example: with path tracing, the 3900X/3900XT appear to have real benefits over the Ryzen 5800X, and how that shakes out with future GPU innovations is a bit up in the air. If GPUs can exploit that edge for RTRT performance in the scenarios where it matters, perhaps their gaming performance pulls ahead, or at least the gap narrows. Another interesting case is the compression and decompression results, which are a split: the Ryzen 5800X wins in WinRAR while the 3900X/3900XT win in 7-Zip. I don't know how NTFS compression plays into things with either of them, nor LZX or XPRESS 4K/8K/16K, but it's all fairly relevant information to know about those differences. I'm curious how things will shake out as GPUs get closer and closer to RTRT performance that more closely correlates with path tracing. I know path tracing and the way RTRT is handled today have distinct differences for now, but the real question is how they intertwine, how that relates to GPU innovation going forward, and how it ties back to multi-core performance and, yes, even compression/decompression. If you're using an NVMe drive as a fast storage device that isn't the OS drive with its write logging, rarely write to it, and rely primarily on read performance, I'd absolutely recommend enabling NTFS compression on it and/or using LZX or XPRESS 4K/8K/16K compression, both for the storage density gains from compacting the contents and for the bandwidth gains from shrinking them down in size.
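For anyone wanting to try that, here's a minimal sketch of the idea, assuming Windows 10 or later with Python 3 and a hypothetical read-mostly data folder at D:\assets (not the OS drive); it just drives the built-in compact.exe tool, so adjust the path and algorithm to taste:

import subprocess

TARGET = r"D:\assets"   # hypothetical read-mostly NVMe data folder, not the OS drive
ALGORITHM = "LZX"       # densest option; XPRESS4K / XPRESS8K / XPRESS16K decompress faster

# compact.exe: /C compresses, /S:<dir> recurses into the folder, /EXE:<alg> selects
# the newer XPRESS/LZX algorithms instead of classic NTFS (LZNT1) compression.
result = subprocess.run(
    ["compact.exe", "/C", f"/S:{TARGET}", f"/EXE:{ALGORITHM}"],
    capture_output=True, text=True,
)
print(result.stdout)

Running compact.exe /S:D:\assets /Q afterwards prints a summary of how much space was saved; LZX gives the best ratio, while the XPRESS variants have the lowest read overhead.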
On the other hand, the Ryzen 5800X showed definite advantages over the Ryzen 3900X and 3900XT, at lower cost, when more latency-sensitive use cases came into play, so it's quite a mixed bag as to which is optimal, why, and where some of the results could shift over time to skew things in favor of one direction versus the other. I forget now whether the Zen 2 CPUs have support for Infinity Cache or not.

I'm curious whether the Blender path tracing times drop if you compress the test data files involved in the benchmark with various compression methods, or whether it actually doesn't impact the results. I think that has solid implications for where RTRT performance could head as GPU innovation improves relative to multi-core performance. If the compression aspect becomes more critical to performance and favors heavily multi-core hardware rather than a slight frequency or IPC edge, that's a consideration as well: which offers the best long-term performance, as opposed to what's best here and now? Intended use cases as well as projected future use cases are both aspects to consider. I think most of us agree path tracing is fantastic, and we all wish that performance could be achieved in real-time ray tracing at 60+ FPS at the resolutions we actually play at. That would be quite amazing. I think the future of ray tracing, and what lets us transition in that direction most efficiently, holds a lot of weight in today's purchasing decisions.

To summarize: if I could pay $10 to $20 more for a Ryzen 3900X/3900XT over a Ryzen 5800X, upgrade my GPU two or three generations down the road, and end up with better RTRT results, that's pretty important to consider, because that's where graphics are headed and where the most concerning performance bottleneck will be in a lot of future games as time marches on. What I'm getting at is this: if I had to buy one CPU and keep it for a decade, but still had the option to swap out the GPU for improvements, which CPU ends up more beneficial if I'm looking at ray tracing performance in gaming, especially if I'm leaning toward real-time path tracing, which obviously has its work cut out for it but keeps inching closer?
 
This was tested with a 2080 Ti; a relevant question is whether an RTX 3090 or an RDNA2 GPU starts to shift performance a little in the other direction. [...]

Yes, and 720p benchmarks are there just for the fun of it?
 
Yes, and 720p benchmarks are there just for the fun of it?
Or, like it's been hinted before, TPU's results are a bit odd. It's not the GPU, and it's not coming from the memory; CDH used 3200 MHz DIMMs with the same timings but got better results...

1604594140796.png
 
Was actually expecting a bit better on the gaming side based on the marketing slides. I guess my next system will come down to value between the 10700K and the 5800X.
 
Remember, Ryzen CPUs always love tight timings,
and the 5000 series supports DDR4-4000 RAM (2000 MHz IF). Let the tweaking start :)
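For reference, a quick sketch of how those numbers line up; this just restates the arithmetic (DDR transfers twice per clock, and the usual 1:1 mode ties FCLK to the memory clock):

# Relationship between the advertised DDR4 rate, the real memory clock, and the
# Infinity Fabric clock when running the usual 1:1 (FCLK = UCLK = MEMCLK) mode.
def fclk_for(ddr_rate_mts: int) -> int:
    memclk_mhz = ddr_rate_mts // 2   # DDR transfers twice per clock cycle
    return memclk_mhz                # 1:1 mode sets FCLK equal to MEMCLK

for rate in (3200, 3600, 4000):
    print(f"DDR4-{rate}: MEMCLK {rate // 2} MHz, 1:1 FCLK {fclk_for(rate)} MHz")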
 

Attachments

  • Opera Snapshot_2020-11-05_172014_www.youtube.com.png (209 KB)
  • Opera Snapshot_2020-11-05_172041_www.youtube.com.png (252.8 KB)
  • Opera Snapshot_2020-11-05_170552_www.youtube.com.png (578.1 KB)
  • Opera Snapshot_2020-11-05_162525_www.guru3d.com.png (48.4 KB)
  • Opera Snapshot_2020-11-05_162600_www.guru3d.com.png (34.6 KB)
  • Screenshot 2020-11-05 170142.png (706.6 KB)
  • Opera Snapshot_2020-11-05_165619_www.youtube.com_LI.jpg (2.9 MB)
Nice.
Any word on the new chipsets?

I believe Zen 3 will be more powerful when all the CPUs are tested with a 3080/3090 and faster RAM.
3200 MHz is the highest memory speed supported in stock configuration. A comparison between products should be stock, unless you compare overclocked vs. overclocked.
 
Nice.
Any word on the new chipsets?


3200 MHz is the highest memory speed supported in stock configuration. A comparison between products should be stock, unless you compare overclocked vs. overclocked.
You are right,
but using faster RAM overall tends to have a better effect on Zen 3.
It seems Zen 3 is bandwidth-starved.
 
Does... it need to be? It's within 2% of the 10900K at 720p. In some games it will be faster, in some slower. In Civ 6, Rage 2, Sekiro, and Wolfenstein 2, for example, the 5800X is faster than the 10900K at 720p. It's also $100 less than the 10900K.
It does when you claim so just a couple weeks before release, yes...
 
Could this be why it performs underwhelmingly? What is the IF running at? Would it help with faster memory?
Seems so; every other reviewer is using 3600 CL16.
 
Erm... Having the same FPS on average as a CPU that costs €300 less doesn't mean winning. The way Lisa was bragging a few weeks ago, I was expecting complete domination. This ain't it.
odPjzKJ.jpg

Gamers Nexus
Average-f.png

Hardware Canucks

I won't link any more reviews for you. If you are interested, check them; if you are not, there you go.

Shameful display.

7 nm vs. 14 nm in gaming, at core parity: 0 : 2.

The only shameful thing here is your comment. :/

The issue with that TechSpot benchmark is that it's only showing the top CPU.
In their defense: Steven says that they will release the reviews of all the other CPUs in a day, and that they all perform nearly the same as the 5950X.
 
Seems so; every other reviewer is using 3600 CL16.
The impact on performance is irrelevant. A reference comparison should be stock.
Far too many buyers are lured into buying memory for overclocking and get unstable machines as a result, either initially or gradually over time. Those of you who are not buying a computer for the purpose of overclocking should run the memory at stock speeds, which is 3200 MHz at 1.2 V for Zen 3 (there are kits from both Kingston and Corsair capable of 3200 MHz with a JEDEC profile). Doing so will not only save you $50-100, but also a lot of headache; it's well worth it for a painless memory setup and long-term stability, while sacrificing only a few percent of performance. Memory overclocking is overclocking, and it should be a conscious choice made only by those wanting to take that risk.
 
Also, and I know I've repeated it a dozen times already, but I don't understand why AMD has the right to increase their prices so much (and, not only that, why people somehow find a justification for it).

They force people to buy the 5900X/5950X CPUs
You don't understand why a company has the right to set prices where they want? What's not to understand? It's a free market. They are well within their rights to set a price of their choice. If you think they are out of line, vote with your wallet. Don't buy it.

Nobody is forcing anyone to buy something.
 
Yeah, not really happy about the price of the 5800X here (NZ$779.00); it's a rather steep increase over what my 3700X cost me (NZ$547.00), for what is essentially a hotter, more power-hungry chip without that large of a performance boost either.
 