
AMD Ryzen 3000 "Zen 2" BIOS Analysis Reveals New Options for Overclocking & Tweaking

It's not a misconception, and there is a metric ton of data available for you to enjoy that underlines major gaps between Core and Zen at high refresh. What you are saying is that a clock speed gap of 600-800 MHz makes no difference... it does. Both in min and max FPS, and both matter a lot for high refresh rate gaming. In your example you are completely GPU limited; you are playing shooters at 1440p high settings. You are 'between 70 and 144'... that says just about nothing.
As a 2700X owner I agree. I seem to run into CPU bottlenecks before a GPU bottleneck, even at 4K sometimes, where a 9900K is just fine with a 2080 Ti instead of a Radeon VII. When streaming I essentially only have 4 cores to play with, dropping Just Cause 3 framerates from the 80s to the 60s or the 100s to the 80s depending on the area, so both core count AND clock speed are important nowadays for gaming. Will definitely get a 12-core 5 GHz 3700X just for streaming, but it will also increase framerates when only gaming by a considerable margin.
 
It's not a misconception, and there is a metric ton of data available for you to enjoy that underlines major gaps between Core and Zen at high refresh. What you are saying is that a clock speed gap of 600-800 MHz makes no difference... it does. Both in min and max FPS, and both matter a lot for high refresh rate gaming. In your example you are completely GPU limited; you are playing shooters at 1440p high settings. You are 'between 70 and 144'... that says just about nothing.
Yes, that's true, but it matters most when pumping extremely high frame rates, often over 150-160, where the CPU becomes the bottleneck.
As you move down to the 100 FPS range the differences become much less worthwhile, especially at 1440p or 4K. The only way you will see 1440p or 4K showing a bigger difference is if the GPU can basically run the game with ease and max it out to the point where the CPU starts to become the bottleneck. Though it's important to note that this gap widens rather gradually as you move to higher FPS, so that's another aspect people tend to ignore. It's not like a cutoff where the load shifts to the CPU and the higher IPC advantage suddenly appears.
But to address your main point, yes, 700 MHz is not a small amount by any means. That's 16% higher than the 2700X (4.3 GHz), and on an extremely refined 14nm++++++ process that can sustain high clock speeds closer to the advertised max. Also account for the other 4-5% IPC advantage that Intel still has, and there you have at least a 20% single-thread/per-core advantage.
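To make that arithmetic explicit, here is a minimal sketch in Python, using only the figures quoted in the post above (the 4-5% IPC edge is the poster's estimate, not a measured number):

```python
# Rough single-thread advantage estimate from the figures quoted above.
zen_plus_clock = 4.3    # GHz, 2700X boost as cited in the post
clock_gap      = 0.7    # GHz, the ~700 MHz gap under discussion
ipc_advantage  = 0.045  # ~4-5% IPC edge attributed to Intel (poster's estimate)

clock_ratio = (zen_plus_clock + clock_gap) / zen_plus_clock  # 5.0 / 4.3 ≈ 1.163
combined    = clock_ratio * (1 + ipc_advantage)              # ≈ 1.215

print(f"clock advantage alone: {clock_ratio - 1:.1%}")   # ~16.3%
print(f"clock + IPC combined:  {combined - 1:.1%}")      # ~21.5%, i.e. 'at least 20%'
```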
 
It's not a misconception, and there is a metric ton of data available for you to enjoy that underlines major gaps between Core and Zen at high refresh. What you are saying is that a clock speed gap of 600-800 MHz makes no difference... it does. Both in min and max FPS, and both matter a lot for high refresh rate gaming. In your example you are completely GPU limited; you are playing shooters at 1440p high settings. You are 'between 70 and 144'... that says just about nothing.
No, what I am saying is that there are many people claiming you can't game on Ryzen CPUs, period.
So I made a clear point that you can, regardless of refresh rate, regardless of whether it's 1080p, 1440p or 2160p. Whatever is possible on Intel CPUs is possible on Ryzen CPUs. You may not get identical frame rates, but they are surely more than playable on Ryzen CPUs. That is the misconception I am talking about. I couldn't care less about the metric ton of data available, because I speak of facts.

I took myself as an example, because people (Intel fanboys) called me a liar in a couple of other forums, claiming my particular setup (1700X & RX 580 8GB) was incapable of running 1440p even at moderate to low picture quality settings. :roll: When I told them I averaged around 70 to 144 FPS on ultra high settings, the trolling started, of course.
NOW do you see the utter nonsense being spread across the internet?
 
No, what I am saying is that there are many people claiming you can't game on Ryzen CPUs, period.
So I made a clear point that you can, regardless of refresh rate, regardless of whether it's 1080p, 1440p or 2160p. Whatever is possible on Intel CPUs is possible on Ryzen CPUs. You may not get identical frame rates, but they are surely more than playable on Ryzen CPUs. That is the misconception I am talking about. I couldn't care less about the metric ton of data available, because I speak of facts.

I took myself as an example, because people (Intel fanboys) called me a liar in a couple of other forums, claiming my particular setup (1700X & RX 580 8GB) was incapable of running 1440p even at moderate to low picture quality settings. :roll: When I told them I averaged around 70 to 144 FPS on ultra high settings, the trolling started, of course.
NOW do you see the utter nonsense being spread across the internet?
Was a bit random though. Maybe lead with that next time?
 
No, what I am saying is that there are many people claiming you can't game on Ryzen CPUs, period.
So I made a clear point that you can, regardless of refresh rate, regardless of whether it's 1080p, 1440p or 2160p. Whatever is possible on Intel CPUs is possible on Ryzen CPUs. You may not get identical frame rates, but they are surely more than playable on Ryzen CPUs. That is the misconception I am talking about. I couldn't care less about the metric ton of data available, because I speak of facts.

I took myself as an example, because people (Intel fanboys) called me a liar in a couple of other forums, claiming my particular setup (1700X & RX 580 8GB) was incapable of running 1440p even at moderate to low picture quality settings. :roll: When I told them I averaged around 70 to 144 FPS on ultra high settings, the trolling started, of course.
NOW do you see the utter nonsense being spread across the internet?

People say a lot of things, but I can't say the common claim is 'you can't game on Ryzen, period'... sorry to burst your bubble. I do remember a fierce rumor when the 7700K was released that Ryzen offered 'better minimums' and 'better frame pacing'. Not supported by actual data, though. After that the bench-dust settled and people concluded that at anything over 60 FPS, Ryzen 1 was not ideal, being limited by clocks and requiring fast memory to really extract performance. Ryzen 1.5 marginally improved on this. I personally expect the next iteration to destroy Intel's gaming dominance in terms of performance, closing the gap and offering more of everything on top. Ironically, half of that has to do with Intel's performance gains grinding to a complete halt.

The reason you may see that a lot is because of the places you visit; enthusiast forums and especially gamers will favor Intel. Don't mistake a niche for the mainstream. In the mainstream, people barely have an idea of what they want, and if someone they trust tells them Ryzen is good, they go with it, just as they would go with someone telling them Intel's the way.

There is also, of course, brand awareness, which is overall a bit higher for Intel. An uphill battle, sure, but not the way you put it.

But another thing: there are also examples, and they are not few and far between, where you actually need that fast Intel CPU with an OC. This goes for games like Total War, many strategy games as they approach the endgame, but also builder/survival games with huge worlds and any (older) title that leans heavily on a single thread. MMOs are another good example of games that really like every MHz you throw at them. It pays off big time, and Ryzen actually does fall short - it will dive under 60 FPS faster and more readily. This is further exacerbated by the problem of the last decade, which is that GPUs have progressively become faster every generation, by as much as 30%, while CPU single-thread performance stagnated entirely. This increases the need for a top-end CPU to support fast GPUs.

Yes, that's true, but it matters most when pumping extremely high frame rates, often over 150-160, where the CPU becomes the bottleneck.
As you move down to the 100 FPS range the differences become much less worthwhile, especially at 1440p or 4K. The only way you will see 1440p or 4K showing a bigger difference is if the GPU can basically run the game with ease and max it out to the point where the CPU starts to become the bottleneck. Though it's important to note that this gap widens rather gradually as you move to higher FPS, so that's another aspect people tend to ignore. It's not like a cutoff where the load shifts to the CPU and the higher IPC advantage suddenly appears.
But to address your main point, yes, 700 MHz is not a small amount by any means. That's 16% higher than the 2700X (4.3 GHz), and on an extremely refined 14nm++++++ process that can sustain high clock speeds closer to the advertised max. Also account for the other 4-5% IPC advantage that Intel still has, and there you have at least a 20% single-thread/per-core advantage.

Another aspect people tend to forget is that you also get higher FPS across the board out of your GPU, at all levels of performance. It's minor, but it's there. While your average FPS may well be above 100, the minimums never are, and having a lot of headroom counts in those situations where the FPS takes a nosedive. This effect gets greater with faster GPUs - so even at 4K you will see an advantage from more CPU grunt.
 
The funny thing is that up to 2008 the trend was the reverse: both Intel and AMD developed integrated memory controllers, with Nehalem and Phenom.
AMD's first IMC was in the Athlon 64 (Socket 754).
 
AMD's first IMC was in the Athlon 64 (Socket 754).
With the more popular version on S939.
Socket 754 was a weird one though, like it was a test bed for the first CPU to integrate the memory controller. Or the budget socket, which served people quite well due to the lower prices.
Socket 754 was the original socket for AMD's Athlon 64 desktop processors. Due to the introduction of newer socket layouts (i.e. Socket 939, Socket 940 and Socket AM2), Socket 754 became the more "budget-minded" socket for use with AMD Athlon 64 or Sempron processors.
 
I have an R5 1600X @ 4 GHz with a Vega 56, and I get 160-220 FPS at 1080p on very low graphics settings in Quake Champions...
The Vega 56 sits at 70% usage, while my CPU sits at 50-60%.
I want a constant 240 FPS for sync and low input lag with my 240 Hz monitor.
I am hopeful that this Zen 2 R7 3700X will push enough data to my Vega that I can get a fluid 240 FPS!
:nutkick:
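For context, a locked 240 FPS leaves a very tight per-frame budget; here is a quick sketch of that arithmetic (illustrative only, and note that the usage percentages above don't translate directly into frame times):

```python
# Per-frame time budget for a fixed FPS target.
target_fps = 240
budget_ms  = 1000 / target_fps                   # ≈ 4.17 ms per frame
print(f"budget at {target_fps} FPS: {budget_ms:.2f} ms")

# Both the CPU's share of a frame (game logic + driver) and the GPU's share
# must fit inside that window; the slower of the two sets the actual FPS.
worst_reported_fps = 160                         # low end of the range reported above
print(f"current worst case: {1000 / worst_reported_fps:.2f} ms per frame")  # 6.25 ms
```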
 
With the more popular version on S939.
Socket 754 was a weird one though, like it was a test bed for the first CPU to integrate the memory controller. Or the budget socket, which served people quite well due to the lower prices.
Sockets 754 and 939 were for mainstream and enthusiast processors respectively. There was no real technical difference other than HyperTransport speed and faster processor models for 939. For some reason S754 did not turn out too well...
 
Sockets 754 and 939 were for mainstream and enthusiast processors respectively. There was no real technical difference other than HyperTransport speed and faster processor models for 939. For some reason S754 did not turn out too well...
You forgot the biggest difference: dual-channel memory.

Socket 754 didn't have an appealing selection of processors after socket 939 arrived.
 
You forgot the biggest difference: dual-channel memory.

Socket 754 didn't have an appealing selection of processors after socket 939 arrived.
Oh, I forgot about that one. I think S754 was single-channel, correct?

Anyhow, since we are on the topic of Zen 2, I found this rather silly image on Google Images when I searched.

[Image: ZEN2_CHIPLET8.png]
And I found this too. What can we confirm about this one? Is there any truth to it? The interesting part is that the PCIe seems to be on each chiplet, with 100-150 GB/s for Infinity Fabric 2.
[Image: AMD-3rd-Gen-Ryzen-chiplet.jpg]
 
I didn't want to sound like I was complaining. lol
You probably know this site; it's quite popular for comparing stuff.
UserBenchmark

No, you are not complaining.
It is the same thing as "Nvidia has STABLE graphics drivers".
And something like:
"Memory problems on an Intel platform? Memory problem.
Memory problems on an AMD platform? AMD problem."

These kinds of misconceptions keep floating around and never seem to stop.
 
You forgot the biggest difference: dual-channel memory.
You are right, I did forget that. I had the nagging feeling that I was not remembering something important :)

And I found this too. What can we confirm about this one? Is there any truth to it? The interesting part is that the PCIe seems to be on each chiplet, with 100-150 GB/s for Infinity Fabric 2.
I/O die:
- DDR4 controllers and IF2 are confirmed.
- Latency issues are claimed to be resolved; we will have to see.
- IF2 is tied to the memory clock, but it does have a divider now (see the sketch after this post).

Chiplet:
- IF2 yes.
- 4 cores per CCX, not 8.
- PCI-e per CCX is unlikely.

Edit:
The first image is messy and there are too many IF links. 5 links per die? I would expect them to still go with three.
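On the divider point above, here is a minimal sketch of what a memory-clock-tied fabric with a divider would look like in practice. The 1:1 vs 2:1 behaviour is how leaks were describing IF2 at the time, so treat the numbers as an assumption rather than something this BIOS analysis confirms:

```python
# Sketch: Infinity Fabric clock derived from the memory clock via a divider.
# DDR4 is double data rate, so MCLK is half of the rated transfer speed.
def fabric_clock_mhz(ddr_rating_mts: int, divider: int = 1) -> float:
    mclk_mhz = ddr_rating_mts / 2      # e.g. DDR4-3600 -> 1800 MHz MCLK
    return mclk_mhz / divider          # 1:1 keeps FCLK = MCLK, 2:1 halves it

print(fabric_clock_mhz(3600))      # 1800.0 MHz, synchronous 1:1 mode
print(fabric_clock_mhz(4400, 2))   # 1100.0 MHz, divider engaged at very high memory speeds
```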
 
It could also be that the divider option is only there as an addition in the new AGESA for Ryzen 1xxx and 2xxx chips, and that it will be untied if a 3xxx CPU (i.e. one with an I/O die) is installed, perhaps?

I don't see why they would need to have it tied if there is a cache in the I/O chip. It could be for latency/sync reasons I suppose...
 
It could also be that the divider option is only there as an addition in the new AGESA for Ryzen 1xxx and 2xxx chips, and that it will be untied if a 3xxx CPU (i.e. one with an I/O die) is installed, perhaps?
These new AGESAs are out, and the setting has not been seen to apply to Ryzen 1000/2000, AFAIK. The assumption that it is meant for the newly added 3000 series seems appropriate.
 
Will depend on IF speed and latency; wouldn't write it off just yet.
Zen 2's Infinity Fabric is supposed to be 2.3x faster, and it also has a lot more cache and improved branch prediction/prefetch.
 
Zen 2's Infinity Fabric is supposed to be 2.3x faster, and it also has a lot more cache and improved branch prediction/prefetch.
We'll see what we see once it's reviewed; "supposed to be 2.3x faster" means little coming from AMD slides, or any company's word of mouth, to be honest.
I don't think sheer bandwidth numbers will do the trick here if they mean more latency at the same time.
They've got 15% to make up to the 8600K/9600K in gaming performance, and they're not going to do that by going 8c -> 12c and 4.2 GHz -> 4.7/4.8 GHz.
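Just to put a number on the clock side of that claim (the speeds are the rumored figures quoted above, and gaming rarely scales 1:1 with clock anyway):

```python
# Clock-speed gain alone, using the rumored figures from the post above.
current_boost = 4.2   # GHz, Zen+ as cited
rumored_boost = 4.7   # GHz, low end of the rumored Zen 2 range
gain = rumored_boost / current_boost - 1
print(f"{gain:.1%}")  # ~11.9% from clocks alone, before any IPC change
```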
 
I am anxiously waiting for Ryzen 2 to arrive to finally upgrade from my old Intel 2500K @ 4.8 GHz... I hope it lives up to our expectations!
 
We'll see what we see once it's reviewed; "supposed to be 2.3x faster" means little coming from AMD slides, or any company's word of mouth, to be honest.

I don't think sheer bandwidth numbers will do the trick here if they mean more latency at the same time.

They've got 15% to make up to the 8600K/9600K in gaming performance, and they're not going to do that by going 8c -> 12c and 4.2 GHz -> 4.7/4.8 GHz.
Yeah, these estimates are usually edge-cases, putting the new product in the best possible light, and as always, good reviews will reveal the true performance.

Different workloads have different performance characteristics, and therefore also benefit differently from architectural improvements. Zen 2 offers several improvements, including front-end changes, a doubling of float throughput (AVX workloads), and more. Gaming performance will probably not benefit a lot from improvements in Infinity Fabric, nothing from the increased float throughput, perhaps some from cache improvements, but a lot from front-end changes (if they are substantial). For branch-heavy code like gaming, front-end changes can sometimes even help more than the average performance gain.

There is one important performance characteristic about gaming, though: the CPU just needs to be fast enough not to bottleneck the GPU, so scaling forever here is actually pointless. We see it clearly with Skylake-based CPUs; for every 100 MHz beyond 4 GHz, the gains decrease. And even some of the lower-clocked Coffee Lakes do very well vs. higher-clocked Zen+ in gaming. But this is actually good news for AMD, as they don't have to be completely on par with Intel to do a good job. I would say if they get the performance gap in gaming down to 2-3% it will be perfectly fine for normal enthusiasts, but if it's in the range of 8-10%, then combining these CPUs with an expensive GPU will quickly become a waste of money.
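A toy model of that "fast enough not to bottleneck the GPU" point: assume each frame is limited by whichever of the CPU or GPU takes longer. Real engines overlap the two, and all the frame times below are made up, so this is only an illustration of why CPU gains flatten out once you are GPU-bound:

```python
# Toy bottleneck model: the slower of the CPU and GPU share per frame caps FPS.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000 / max(cpu_ms, gpu_ms)

gpu_ms = 8.0                            # hypothetical GPU frame time (125 FPS ceiling)
for cpu_ms in (12.0, 9.0, 7.0, 5.0):    # progressively faster CPUs
    print(f"CPU {cpu_ms:>4} ms -> {fps(cpu_ms, gpu_ms):6.1f} FPS")
# 12.0 ms ->  83.3 FPS  (CPU-bound: a faster CPU helps a lot)
#  9.0 ms -> 111.1 FPS
#  7.0 ms -> 125.0 FPS  (GPU-bound from here on: extra CPU speed buys nothing)
#  5.0 ms -> 125.0 FPS
```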

One quick note on core count: when Zen 2 arrives, I expect we'll get another wave of "but this has more cores, so it must be better (in the long run)". For synchronous workloads like gaming, more cores will not compensate for slower cores, and that's not going to change anytime soon.
 
Improvements in RAM speed (and maybe latency is also better) may also help, again with diminishing returns, of course.
 
Memory timings and latency on both Ryzen and Skylake-X have massive real-life effects, especially if you can get your timings very tight.

Do a little Google searching; you'll find it's up to 20-30% in some games, especially in minimum FPS.

Yeah, it does sound like they are again trying to "fix" a latency problem with speed, but the truth is I have DDR3, it gives me 20-25 GB/s, and nothing is remotely limited by that speed, and I only have dual channel; people with quad channel show that it gives them nothing (outside benchmarks). What good will 100 GB/s do if nothing is limited by bandwidth anyway? But the math checks out: more cores, if they are all working, could probably use more memory bandwidth, but that only helps programs that work on large data sets across many cores at the same time; games, on the other hand, don't. :x It is why the Core 2 Duo was much faster per clock for games: fast access to a large L2 cache (6 MB in 15 clocks), and even to this day nothing beats it at that. Haswell, which is pretty much the same core Intel is using today in its latest generations (only with a DDR4 controller), has a 30-clock penalty for access to its 8 MB cache, so to compensate for this very high latency, Intel added a 256 KB dedicated cache (which is 12 clocks) in the hope that it will help (it probably does, for smaller data sets of course).
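To put those cycle counts in comparable units, a quick conversion sketch (the cycle counts are the ones cited in the paragraph above; the clock speeds are just representative, assumed values):

```python
# Cache latency in nanoseconds: ns = cycles / clock_in_GHz.
def latency_ns(cycles: int, clock_ghz: float) -> float:
    return cycles / clock_ghz

print(latency_ns(15, 3.0))   # ~5.0 ns - large Core 2 style L2, 15 cycles at ~3 GHz
print(latency_ns(12, 4.0))   # ~3.0 ns - small 256 KB L2, 12 cycles at ~4 GHz
print(latency_ns(30, 4.0))   # ~7.5 ns - shared last-level cache, ~30 cycles at ~4 GHz
```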

If the IF still has high latency, these are processors that are going to be good for heavy-duty work with large data sets; you will be able to play games on them, but probably not at performance as high as Intel's (round 2 of low 1080p performance on Zen).

But this is all theory; nothing is known right now, and I also hope that, besides the increased speed, the latency will also be good this time.

It's strange, but it seems to me that the whole industry is going in the same direction: DDR4 has higher latency than DDR3, Haswell processors have more latency than Core 2 Duo, AMD has more latency in cache and memory, etc.
The funny thing is that up to 2008 the trend was the reverse: both Intel and AMD developed integrated memory controllers, with Nehalem and Phenom.
 
https://wccftech.com/asus-x570-motherboards-next-gen-amd-ryzen-3000-cpus-leak-out/
I think motherboard vendors now know the potential of AM4 boards, and they will make even more models with great features.
P.S. Why hasn't a single Gigabyte board gotten the new BIOS?

 
IMO they might be wise to wait until it gets a bit more testing; the 0.0.7.2 for my Taichi Ultimate is an utter disaster.

I see 1.0.0.1 or 2 appearing on other boards, so I think folks should wait for that one at least.
 