
Intel Core i9 and Core i7 "Rocket Lake" Lineup Leaked, Claims Beating Ryzen 9 5900X

Joined
Jan 27, 2015
Messages
1,649 (0.49/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
In these special times, this may be a factor, for sure.
But also remember that those who buy the more powerful CPUs generally have some more demanding workflows than Excel and a web browser, many of which are GPU accelerated. Or perhaps Farmville has become more sophisticated than I remember?


Probably about as many as there are heavy users of Cinema 4D in this forum, and judging by the obsession about obscure edge-case benchmarks there must be thousands of them in this forum alone. :p

This kind of thinking is what I was referring to in post #68; most of you should be focusing on the i5-11600K vs. the 5600X. There are very few real-world, non-server workflows that scale beyond 6-8 cores, yet so many buy 12- and 16-core CPUs, truly just for bragging rights, and then use "future-proofing" as an excuse to justify the purchase. (Of course there are a handful who actually use the cores; I'm not talking about those.)

Completely agree with that. I've run perfmon on my box multiple times for an entire day, and even at the most CPU-intensive moments during those measurements I would characterize the usage as 'lightly threaded' or 'single-core limited'. 80% of the time it's near idle, like right now, typing this.

So really, at 6 cores, I'm just interested in the single-core performance of the new chips. I would bet that if people ran the same analysis of their usage, without borking the results by running a benchmark or some other artificial load, they'd find they have the same usage pattern.
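For anyone curious to try the same kind of analysis, here is a minimal sketch of how a day of per-core utilization samples (like perfmon exports) might be bucketed. The thresholds and labels are my own illustrative assumptions, not anything perfmon defines:

```python
# Sketch: classify CPU usage samples as idle / lightly threaded / heavily threaded.
# The 50% busy threshold and the bucket names are illustrative assumptions.

def classify_sample(per_core_util, busy_threshold=50.0):
    """per_core_util: list of per-core utilization percentages for one sample."""
    busy = sum(1 for u in per_core_util if u >= busy_threshold)
    if busy == 0:
        return "idle"
    if busy <= 2:
        return "lightly threaded"
    return "heavily threaded"

def summarize(samples):
    """Count how often each classification occurs over a day's samples."""
    counts = {"idle": 0, "lightly threaded": 0, "heavily threaded": 0}
    for s in samples:
        counts[classify_sample(s)] += 1
    return counts

# Example: a 6-core box that is mostly idle, occasionally single-core limited.
day = [[2, 1, 0, 3, 1, 2]] * 8 + [[95, 10, 5, 3, 2, 1]] * 2
print(summarize(day))  # {'idle': 8, 'lightly threaded': 2, 'heavily threaded': 0}
```

The point of the exercise is the same as the post's: if the "heavily threaded" bucket stays near zero over a real day, extra cores were not the bottleneck.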
 
Joined
Feb 22, 2017
Messages
26 (0.01/day)
Completely agree with that. I've run perfmon on my box multiple times for an entire day, and even at the most CPU-intensive moments during those measurements I would characterize the usage as 'lightly threaded' or 'single-core limited'. 80% of the time it's near idle, like right now, typing this.

So really, at 6 cores, I'm just interested in the single-core performance of the new chips. I would bet that if people ran the same analysis of their usage, without borking the results by running a benchmark or some other artificial load, they'd find they have the same usage pattern.
That's quite different from my measurements. Chrome with 40 tabs (some pages are just so poorly written), image processing, etc. all tax it in the foreground. Then in the background there's Plex server, Resilio Sync, etc., which sometimes just decide to fully utilize a core or more, and I'm too lazy to hunt them down. I got rid of antivirus, but many folks still keep it, and those programs also tend to have similar behavior. Way too many things to keep the CPU at idle.

That's without considering any real professional uses, which are far more than just POV-Ray. As a software developer I both fully utilize CPUs/GPUs myself and deal with demanding programs others write, including IDEs, which can be surprisingly taxing these days.
 
Joined
Mar 16, 2017
Messages
1,665 (0.64/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVMe 1TB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
The battle is all going to depend on pricing, and the noted GPU pricing issue as well.

If you are coming from say a Haswell era PC, you want to upgrade it today. You're not a gamer, you've used iGPU. You buy a 5600X. Guess what?

You need to buy a GPU. Even if you're just playing facebook games or doing Excel, you have to.

Have you seen what a 'cheap' GPU costs these days? When you could buy a $30 1030 or $60 1050 it was no big deal, not worth mentioning, but these days? To get anything like the Xe GPU, you'll be spending north of $150. And you'll have to look. The other option is to get something really, really, really old and obsolete for ~$50. Those cards won't even play a modern YouTube or Netflix stream reasonably well.

So this doesn't matter for most folks here in these forums as they already have GPUs, so they don't think about it. But it most certainly does affect the pricing of the OEM rigs, pre-builts, and non gamer DIY rigs. That's going to be a big problem for AMD in the desktop space.
Just out of curiosity, I checked Microcenter this morning for GPU stock. They had 2 RX570s in stock, for $350 each! A couple years ago, I paid a little over $100 for the same card. I checked the stock again just now, and they are now down to 1 card. I just built a system about a month back, and I'm glad my GPU needs are minimal. I can live with Intel graphics just fine (good Linux support, really). What a strange time we live in.
 
Joined
Jan 27, 2015
Messages
1,649 (0.49/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
That's quite different from my measurements. Chrome with 40 tabs (some pages are just so poorly written), image processing, etc. all tax it in the foreground. Then in the background there's Plex server, Resilio Sync, etc., which sometimes just decide to fully utilize a core or more, and I'm too lazy to hunt them down. I got rid of antivirus, but many folks still keep it, and those programs also tend to have similar behavior. Way too many things to keep the CPU at idle.

That's without considering any real professional uses, which are far more than just POV-Ray. As a software developer I both fully utilize CPUs/GPUs myself and deal with demanding programs others write, including IDEs, which can be surprisingly taxing these days.

So I've been a developer for 30 years, and I'm a lead these days so I regularly have developers working for me on projects.

I'm not too impressed with a developer who runs 40 tabs in Chrome, Plex, an image editor (why, are you also a visual artist?), some third-party file sync app, and then removes virus scanning from the PC where they do development work... with that file sync running and automating virus spread. That's *dumb*. You'd get fired quickly in most organizations.

There are edge cases and typically they are visual design and media artists. Most developers are neither. Your scenario is not credible.

Just out of curiosity, I checked Microcenter this morning for GPU stock. They had 2 RX570s in stock, for $350 each! A couple years ago, I paid a little over $100 for the same card. I checked the stock again just now, and they are now down to 1 card. I just built a system about a month back, and I'm glad my GPU needs are minimal. I can live with Intel graphics just fine (good Linux support, really). What a strange time we live in.

What's crazy with Microcenter, none - zero - of the GPUs for sale can be reserved. You have to go to the store, and the on hand quantities are low.

But with those prices, what's the point of getting a new CPU? Who's gonna go buy a $300 5600X so they can pair it with a $350 RX 570?

What's even crazier about such a pairing: there's this review site called Vortez that has just about the oldest test components on the planet. In all his reviews he uses an RX 480, some old Corsair LPX 3000 MHz DDR4, and an old 128GB Intel SSD I've never heard of.

On that setup, guess which CPU is the fastest at Tomb Raider?

10900K? nope. 5800X? Nope. 5600X? Nope.

Zen 1.5 2600X.

And an 8700K beats everything at Total War: Warhammer at 1080p with that setup - beats Comet Lake and Zen 3.

So it's entirely possible that someone getting a 5600X for gaming but stuck with one of those old GPUs may actually get lower FPS than the guy who went cheap on the CPU.
 
Joined
Feb 22, 2017
Messages
26 (0.01/day)
So I've been a developer for 30 years, and I'm a lead these days so I regularly have developers working for me on projects.

I'm not too impressed with a developer who runs 40 tabs in Chrome, Plex, an image editor (why, are you also a visual artist?), some third-party file sync app, and then removes virus scanning from the PC where they do development work... with that file sync running and automating virus spread. That's *dumb*. You'd get fired quickly in most organizations.

There are edge cases and typically they are visual design and media artists. Most developers are neither. Your scenario is not credible.
I'm a tech lead with some projects in security (admittedly not my main area). Guess what? I have not contracted a single virus in my career, and I'm nowhere close to being fired.

If you are as well-versed in this area as you claim, you should know that AVs are mostly just another closed-source program consuming resources, with potential privacy issues. Good downloading habits, encryption, authentication, VMs, etc. are much more important. And you should know that file sync won't spread a virus unless it's also executed elsewhere (or at least opened, if a document processor has a vulnerability).

I have to note that this is my personal computer, not the corporate one. I basically don't care even if this computer is infected. Everything is backed up with snapshots and multiple copies, and everything important is encrypted with AES and randomly generated passwords. If you are talking about corporate computers, the companies I've worked at don't deploy commercial AV solutions (maybe due to privacy concerns, though they do use related technologies, including checksum databases). Your 30-year experience might differ.
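The "checksum databases" idea mentioned above can be sketched with nothing but the standard library; this is an illustrative example of verifying a file against a known-good digest, not any particular company's tooling:

```python
# Sketch: verify a downloaded file against a known-good SHA-256 digest.
# The file name and the commented digest below are made up for illustration.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large downloads don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """Compare against a digest obtained out-of-band (vendor site, internal DB)."""
    return sha256_of(path) == expected_hex.lower()

# Hypothetical usage:
# verify("installer.exe", "<digest published on the vendor's download page>")
```

The same pattern underpins the "related technologies" mentioned: a database of expected checksums, compared against what's actually on disk.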

Also an amateur photographer with a recent focus on astro (DeepSkyStacker etc). No idea why that's not credible, but it's up to you to decide. :)
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
When one plans for cooling, MB, and PSU, one should go with the worst-case power consumption to avoid accidentally frying the components, shutdowns, etc., not the ideal "main use case" average.

And that's why they focus on those. It's fairly relevant.
Yes, it is relevant for the PSU, but not for cooling, noise, etc.
But having a little margin over sustained load is good anyway, as the PSU lasts longer, so PSU recommendations usually aren't changed because of a slightly higher peak power.


Completely agree with that. I've run perfmon on my box multiple times for an entire day, and even at the most CPU-intensive moments during those measurements I would characterize the usage as 'lightly threaded' or 'single-core limited'. 80% of the time it's near idle, like right now, typing this.
Well, of course there is little difference during idle; the question is the characteristics under a realistic load. And the reality is that even for most power users it's generally "medium threaded" at most, because synchronized workloads don't scale perfectly. "Running many things at once" is not a good excuse either, because it usually runs into other bottlenecks long before core count. Even when I try to stress my 6-core with a real workload, with a couple of VMs, compiling a decent-sized project and running a bunch of tabs in a browser, it rarely goes much over 50% load, and runs into scheduling bottlenecks long before maxing out the cores.
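The scaling ceiling described here is essentially Amdahl's law in practice; a quick sketch makes the "barely gains past 6-8 cores" point concrete (the 70% parallel fraction is an assumed figure for illustration, not a measurement):

```python
# Sketch: Amdahl's law - speedup is capped by the serial fraction of a workload.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A workload that is 70% parallelizable (assumed) flattens out quickly:
for n in (1, 2, 4, 6, 8, 16):
    print(n, round(amdahl_speedup(0.7, n), 2))
```

With those numbers, doubling from 8 to 16 cores adds only about 0.3x extra speedup, which is exactly why "medium threaded" is as far as many real workloads get.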

Chrome with 40 tabs (some pages are just so poorly written), image processing, etc. all tax it in the foreground. Then in the background there's Plex server, Resilio Sync, etc., which sometimes just decide to fully utilize a core or more, and I'm too lazy to hunt them down. I got rid of antivirus, but many folks still keep it, and those programs also tend to have similar behavior. Way too many things to keep the CPU at idle.
Chrome spawns an incredible number of threads, seemingly up to one per CPU thread for each tab, which quickly adds up to hundreds of threads. This can "overload" the OS scheduler and cause serious stutter (on the desktop) even though there is barely any load on the CPU. Unless those tabs are all playing video or something, you are more likely to hit this kind of problem than to run out of CPU cores with web browsers. And having many more CPU cores means Chrome will spawn even more worker threads per tab, burdening the scheduler even more.

As a software developer I both fully utilize CPUs/GPUs myself and deal with demanding programs others write, including IDEs, which can be surprisingly taxing these days.
There are certainly many IDEs and text editors that are very heavy these days, but few if any of them get much better with more threads. The non-native ones especially are horrible: Eclipse has always been painfully sluggish, and even the JavaScript-based text editors like Atom and VS Code, which are so popular these days, are incredibly laggy and unreliable, to the point where they can miss or misinterpret key presses. These problems are mostly due to cache misses, and there is little to do about that other than writing native, cache-optimized code. It's sad that these tools are worse than the tools we had back in the 90s.

I'm personally "allergic" to lag when typing code; little bugs me more than an unresponsive text editor (or IDE). I can deal with operations being slow, but unresponsive typing and button clicks become a constant annoyance, and lead to misinterpreted input and loss of focus. I in fact wrote most of my code (at work and at home) in Gedit from ~2009-2015, because I'll choose a plain text editor over sluggish "IDE features" any day; I only eventually switched to Sublime because newer Gedit builds were buggy. Still today I stick to a "plain" text editor whenever possible; with that plus a drop-down terminal, tmux, grep and a few other specific tools I'll "beat" an IDE in productivity any day, and I'm saying that as someone who was a "big IDE guy" in the 90s and early 2000s.

So I've been a developer for 30 years, and I'm a lead these days so I regularly have developers working for me on projects.
I'm not too impressed with a developer who runs 40 tabs on Chrome, Plex, an image editor (why, are you also a visual artist?),<snip>
There are edge cases and typically they are visual design and media artists. Most developers are neither. Your scenario is not credible.
While I have no reason to doubt your evaluation of your fellow developer, I seriously doubt his/her lack of professionalism has much to do with the specifics you mentioned, or at least it depends on how he/she uses the tools. So more context is needed to justify such claims.

While graphical skills may be less common among programmers, they can certainly be a useful asset. I consider myself an old-school programmer, yet I have basic image editing and drawing skills, which have been useful quite often, for mockups, documentation or graphical elements in software.

And don't you dare judge people based on tab usage :p

What's even crazier about such a pairing: there's this review site called Vortez that has just about the oldest test components on the planet. In all his reviews he uses an RX 480, some old Corsair LPX 3000 MHz DDR4, and an old 128GB Intel SSD I've never heard of.

On that setup, guess which CPU is the fastest at Tomb Raider?

10900K? nope. 5800X? Nope. 5600X? Nope.

Zen 1.5 2600X.

And an 8700K beats everything at Total War: Warhammer at 1080p with that setup - beats Comet Lake and Zen 3.

So it's entirely possible that someone getting a 5600X for gaming but stuck with one of those old GPUs may actually get lower FPS than the guy who went cheap on the CPU.
It is very unlikely that an old game will scale better with an old CPU. There are no ISA differences or fundamental scaling characteristics between these CPUs to exploit in "optimizations".
This is probably just a bad test with poor testing procedures. :rolleyes:
 
Joined
Jan 27, 2015
Messages
1,649 (0.49/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
While I have no reason to doubt your evaluation of your fellow developer, I seriously doubt his/her lack of professionalism has much to do with the specifics you mentioned, or at least it depends on how he/she uses the tools. So more context is needed to justify such claims.

Not really. The entire scenario looked contrived by someone who doesn't do development, a red herring. It's possible he has some edge case, but it's certainly not typical. Developers who eschew basic security like virus scanning endanger their entire team (their livelihood) and the organization they work for. It's a really good way for a software engineer to wind up managing a GameStop.

It is very unlikely that an old game will scale better with an old CPU. There are no ISA differences or fundamental scaling characteristics between these CPUs to exploit in "optimizations".
This is probably just a bad test with poor testing procedures. :rolleyes:

There are a lot of reasons that can happen, which have more to do with optimizations of the hardware/software combination. Just because you have a newer CPU does not mean it is faster at everything when you get down to the machine-code level. And for a card like the 480, AMD has probably not spent significant time optimizing the drivers for Zen 2 and Zen 3.

Which gets back to the original point: if you're having to buy older hardware, you may not get the expected results with new CPUs.

Check this out as an example. There are some instructions where a Phenom II has 4X the "IPC" of Skylake:

 
Joined
Feb 22, 2017
Messages
26 (0.01/day)
Great. Finally a reasonable fellow instead of someone who blanket disqualifies others based on questionable logic.

There are certainly many IDEs and text editors that are very heavy these days, but few if any of them get much better with more threads.
That's true. However in my case I'm just sometimes launching multiple VS Code and JetBrains IDEs to work on multiple repositories, so some simple multi-processing.

While it may be less common that programmers don't possess graphical skills, it can certainly be a useful asset. I consider myself an old school programmer, yet have basic image editing and drawing skills, which has been useful quite often, for mockups, documentation or graphical elements in software.
Yeah. For me, as mentioned above, it's mainly image processors as an amateur photographer. It's true that Capture One etc. doesn't scale well with threads, but some programs like DeepSkyStacker do. Not sure why his/her idea of a developer is so narrow in hobbies.

While I have no reason to doubt your evaluation of your fellow developer, I seriously doubt his/her lack of professionalism has much to do with the specifics you mentioned, or at least it depends on how he/she uses the tools. So more context is needed to justify such claims.
Fine. As I mentioned, I removed AV on my personal machine due to privacy concerns (it won't affect my corporate machine at all, except for severe macOS LAN vulnerabilities like the very few Windows has had; not very likely, and less of a threat than an attack initiated from kids' machines in that respect). The corporate machine does have binary checksums etc., but the company doesn't deploy commercial AVs either. I see no issue with that, as comprehensive access controls, isolation, encryption, etc. are deployed on the machines and the corporate network. I have seen many developers with much worse security practices, and I'm not sure they've all been fired from the projects he/she leads, lol.

Further feedback, such as regarding my lack of professionalism, is certainly welcome.
 
Joined
Dec 28, 2012
Messages
3,478 (0.84/day)
System Name Skunkworks
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software openSUSE tumbleweed/Mint 21.2
So I've been a developer for 30 years, and I'm a lead these days so I regularly have developers working for me on projects.

I'm not too impressed with a developer who runs 40 tabs in Chrome, Plex, an image editor (why, are you also a visual artist?), some third-party file sync app, and then removes virus scanning from the PC where they do development work... with that file sync running and automating virus spread. That's *dumb*. You'd get fired quickly in most organizations.

There are edge cases and typically they are visual design and media artists. Most developers are neither. Your scenario is not credible.



What's crazy with Microcenter, none - zero - of the GPUs for sale can be reserved. You have to go to the store, and the on hand quantities are low.

But with those prices, what's the point of getting a new CPU? Who's gonna go buy a $300 5600X so they can pair it with a $350 RX 570?

What's even crazier about such a pairing: there's this review site called Vortez that has just about the oldest test components on the planet. In all his reviews he uses an RX 480, some old Corsair LPX 3000 MHz DDR4, and an old 128GB Intel SSD I've never heard of.

On that setup, guess which CPU is the fastest at Tomb Raider?

10900K? nope. 5800X? Nope. 5600X? Nope.

Zen 1.5 2600X.

And an 8700K beats everything at Total War: Warhammer at 1080p with that setup - beats Comet Lake and Zen 3.

So it's entirely possible that someone getting a 5600X for gaming but stuck with one of those old GPUs may actually get lower FPS than the guy who went cheap on the CPU.
TR is an anomaly. The vast majority of games will run FAR faster on a 5600X than on any Ryzen 2000-series CPU.

As for Warhammer, well yeah, Warhammer is single-thread limited and loves the GHz, so Intel still has an advantage here. Comet Lake has trouble maintaining its higher boost clocks and has higher core-to-core latency than Coffee Lake had, which is likely holding something else back.

But the vast majority of games will favor the newest hardware.
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Developers who eschew basic security like virus scanning endanger their entire team (their livelihood) and the organization they work for. It's a really good way for a software engineer to wind up managing a GameStop.
I was actually not talking about security policies; I focused on the other part of your argument.
Having good security policies is useful, but making ridiculous and pointless policies results in people not respecting them. I've seen a lot in this category from previous employers, including requiring everyone to change passwords every three months yet having a common password for every critical internal service, or a "security expert" suddenly wanting AV installed on their Linux workstations… And some even more ridiculous ones with possible exploits, which I'm not going to mention specifics about because they are likely still in place.

There are a lot of reasons that can happen, which have more to do with optimizations of the hardware/software combination. Just because you have a newer CPU does not mean it is faster at everything when you get down to the machine-code level. And for a card like the 480, AMD has probably not spent significant time optimizing the drivers for Zen 2 and Zen 3.

Which gets back to the original point: if you're having to buy older hardware, you may not get the expected results with new CPUs.

Check this out as an example. There are some instructions where a Phenom II has 4X the "IPC" of Skylake:

I know about it, and it's a curiosity, but beyond that it's not applicable to making whole applications run faster on older hardware.

That's true. However in my case I'm just sometimes launching multiple VS Code and JetBrains IDEs to work on multiple repositories, so some simple multi-processing.
Multiple editors or IDEs is certainly common, I do it all the time.
But do they cause serious load on many cores simultaneously, though?

Yeah. For me as mentioned above, mainly using image processors as an amateur photographer. It's true that Capture One etc doesn't scale well with threads, but some programs like DeepSkyStacker does. Not sure why his/her idea of a developer is so narrow in hobbies.
But do any of these tools scale significantly beyond 6-8 cores, though?

Your further feedbacks, such as regarding lack of professionalism, are certainly welcomed.
I'm sorry, perhaps we miscommunicated? I thought RandallFlagg was referring to a colleague; perhaps I mixed two things together, my head is too tired. I certainly wasn't referring to you. Anyway, I meant to say that those things mentioned were not signs of a lack of professionalism. :)
(And I do agree about AV)
 
Joined
Feb 22, 2017
Messages
26 (0.01/day)
But do any of these tools scale significantly beyond 6-8 cores, though?
DeepSkyStacker scales well with 32 threads, as it processes multiple shots. For more mainstream tools I haven't benchmarked closely, but they should scale when e.g. bulk-editing multiple photos. Many operations on a single photo could also be parallelized, but aren't right now. I guess those devs just aren't incentivized to work on it.

I've had numerous occasions where each VS Code window consumed a full core or so. So around 3 cores: not saturating a 6-core or more, but nowhere near idle.
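Batch work across many photos parallelizes naturally because each file is independent, which is why stacking and bulk edits scale where single-photo edits often don't. A minimal sketch of that worker-pool pattern (the "develop" step is a made-up stand-in for real raw processing, and CPU-bound Python work would use a process pool rather than threads):

```python
# Sketch: farm independent per-photo work out to a pool of workers.
# "develop" is a placeholder for a real operation (raw conversion, export, etc.).
from concurrent.futures import ThreadPoolExecutor

def develop(photo):
    # Placeholder edit: treat each photo as a list of pixel values to normalize.
    peak = max(photo)
    return [round(p / peak, 3) for p in photo]

def develop_batch(photos, workers=4):
    # Each photo is independent, so throughput scales with cores up to the
    # batch size (use ProcessPoolExecutor for real CPU-bound Python work).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(develop, photos))

batch = [[10, 20, 40], [5, 5, 10]]
print(develop_batch(batch))  # [[0.25, 0.5, 1.0], [0.5, 0.5, 1.0]]
```

This is the same shape of parallelism DeepSkyStacker exploits: many shots, one independent job per shot, so extra threads keep paying off.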

I'm sorry, perhaps we miscommunicated? I thought RandallFlagg was referring to a colleague; perhaps I mixed two things together, my head is too tired. I certainly wasn't referring to you. Anyway, I meant to say that those things mentioned were not signs of a lack of professionalism. :)
(And I do agree about AV)
Ha. RF basically declared my scenario fake because I removed AVs and do graphics-related work in addition to dev :). Whatever. Not worth continuing to argue with him/her.
 
Joined
Jan 15, 2021
Messages
337 (0.28/day)
I've seen an AnandTech review where the 5800X was way ahead of Rocket Lake at 720p and 1080p (like 30% more FPS).
 
Joined
Jan 27, 2015
Messages
1,649 (0.49/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
I've seen an Anand tech review where the 5800 was way ahead of rocket lake on 720 and 1080p (like 30% more fps).

AT did a hack of a review using an older beta BIOS, probably to get quick clicks. Cutress made a bunch of excuses for why that wouldn't make much difference, but newer information with more recent (still beta) BIOSes is showing just how badly wrong AT was about the effect of using a beta BIOS in the first place.

They should have known that; I'd say they are either willfully ignorant for clicks or just plain incompetent.

No reviews exist with a release BIOS yet, but here is an example of the difference it is making just in *newer* Beta BIOS versions.

This is AIDA memory read/write; the newer BIOS has a 50% increase in write speeds:

[Attached screenshot: AIDA memory read/write results]



Y-Cruncher 500M: with the updated BIOS it's faster than the 5800X; with the older BIOS it's slower than a 3600X:

[Attached screenshot: Y-Cruncher 500M results]


Enough to take it from a loss to a win compiling Firefox vs. the 5800X (a 3.5% compile-time improvement just from the newer BIOS):
[Attached screenshot: Firefox compile results]
 

Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
AT did a hack of a review using an older beta BIOS, probably to get quick clicks. Cutress made a bunch of excuses for why that wouldn't make much difference, but newer information with more recent (still beta) BIOSes is showing just how badly wrong AT was about the effect of using a beta BIOS in the first place.

No reviews exist with a release BIOS yet, but here is an example of the difference it is making just in *newer* Beta BIOS versions.
Agree.
Pre-release reviews can be quite a bit off, even if the reviewers got their hands on final hardware, as the release firmware may still not be included, since the CPU was probably made months ago. Early firmware may lack optimal boosting and voltage adjustments. I see no reason to rush to judgment yet; let's wait at least for launch-day reviews with the release BIOS and firmware. People are still asking us to cut some slack for Zen 3 BIOS and firmware (which released >4 months ago), so I think we can at least give Intel until launch day. :rolleyes:
 
Joined
Jan 27, 2015
Messages
1,649 (0.49/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
Agree.
Pre-release reviews can be quite a bit off, even if the reviewers got their hands on final hardware, as the release firmware may still not be included, since the CPU was probably made months ago. Early firmware may lack optimal boosting and voltage adjustments. I see no reason to rush to judgment yet; let's wait at least for launch-day reviews with the release BIOS and firmware. People are still asking us to cut some slack for Zen 3 BIOS and firmware (which released >4 months ago), so I think we can at least give Intel until launch day. :rolleyes:

I don't really think too much of the tests AT does. They have very few real-world applications.

They also completely missed the bus (pun intended) on the 2:1 vs 1:1 memory configuration on Rocket Lake. This is the same as crippling a Zen chip by running its Infinity Fabric asynchronously at odd ratios.

See this article.

At the official Intel spec of DDR4-3200, Anandtech's 11700K would have defaulted to the 1:2 asynchronous IMC mode, which would have hurt their latency-sensitive benchmarks such as gaming. Anandtech tested at stock, and DDR4-3200 is indeed the spec, but it's misleading: the actual stock setting is 3200 at 1:2. Oddly, Intel also supports 2933 at 1:1, which would have been much faster.

This explains the poor gaming performance in Anandtech's review. The 'default' DDR4-3200 is 1:2 out of the box, which is extremely odd considering you can set it to 1:1 in the BIOS. Anandtech could have run it at 1:1 to get better results, but that would be non-stock, i.e. an overclock.
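For anyone who wants the arithmetic behind this, here's a rough sketch of how gear mode cuts the memory controller clock (the `imc_clock_mhz` helper is my own illustration, not anything from Intel's documentation):

```python
# Illustrative helper: gear mode divides the memory controller (IMC)
# clock relative to the DRAM data clock.

def imc_clock_mhz(ddr_rate: int, gear: int) -> float:
    """Memory controller clock for a DDR transfer rate and gear mode.

    DDR transfers twice per clock, so the data clock is half the
    rated transfer rate; Gear 2 runs the IMC at half the data clock.
    """
    data_clock = ddr_rate / 2  # e.g. DDR4-3200 -> 1600 MHz data clock
    return data_clock / gear

# Rocket Lake's 'stock' DDR4-3200 setting runs the IMC in Gear 2 (1:2):
print(imc_clock_mhz(3200, 2))  # 800.0 MHz
# ...while the also-supported DDR4-2933 1:1 mode keeps it synchronous:
print(imc_clock_mhz(2933, 1))  # 1466.5 MHz
```

So at the 'stock' 3200 1:2 setting the IMC runs at only 800 MHz, versus 1466.5 MHz at 2933 1:1 — which is why 2933 1:1 can come out ahead of 3200 1:2 in latency-sensitive workloads.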

AT didn't even reveal what motherboard they used.

Basically, that AT review is worthless.
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
This subject deserves more investigation, even if it turns out that the i7 and i5 models have some limitations in their memory controller. And if so, that's down to yields, not artificial segmentation.

Regardless, I don't put much faith in reviews from Anandtech; there are too many obvious tells that some of their reviewers have no idea what they are doing. I looked at this workstation board review the other day: look at the fourth paragraph under "Memory Testing: Limited QVL". The reviewer clearly doesn't understand what JEDEC SPD is, and assumes XMP will work on workstation boards. Such glaring incompetence calls the entire review into question.
 
Joined
Oct 27, 2009
Messages
1,133 (0.21/day)
Location
Republic of Texas
System Name [H]arbringer
Processor 4x 61XX ES @3.5Ghz (48cores)
Motherboard SM GL
Cooling 3x xspc rx360, rx240, 4x DT G34 snipers, D5 pump.
Memory 16x gskill DDR3 1600 cas6 2gb
Video Card(s) blah bigadv folder no gfx needed
Storage 32GB Sammy SSD
Display(s) headless
Case Xigmatek Elysium (whats left of it)
Audio Device(s) yawn
Power Supply Antec 1200w HCP
Software Ubuntu 10.10
Benchmark Scores http://valid.canardpc.com/show_oc.php?id=1780855 http://www.hwbot.org/submission/2158678 http://ww
In case anyone was wondering about the config:

Pre-production Rocket Lake chip and board ROMs, so probably less than ideal performance.
The AMD board is one BIOS version old, but on the current AGESA; no performance benefits until January.

Slow RAM on both sides: 3200, 4x 8 GB. I would like to see CAS details in these disclosures; while it could be the same kit, it's a pretty big target for optimizations.
[screenshot: leaked benchmark configuration]

Looks like an honest representation of performance as it stood in December, with somewhat stricter power capping deployed on the AMD system.
I look forward to the mixed reviews in 15 days.



As for Anandtech running things at 3200, Intel did as well.
Very interesting that the default at 3200 is 1:2; that's... bad. AMD can run 1:1 at 3600 without issue, and at 4000 on good chips.
If the config is noted, the config is noted, though it does seem a bit sloppy for a review not to flag that as the reason it was lagging behind, and not to test 2933 at 1:1.
 

The only 'supported' speed for AMD at 1:1 is 3200 though, i.e. a 1600 MHz Infinity Fabric. Yes, most chips can go faster; my 10400 can do a 4.4 GHz uncore, but that's an overclock.

Intel can undoubtedly run 1:1 on most RKL chips as well, and the Intel IMC in general has greatly outperformed AMD's Infinity Fabric, which is in no small way responsible for the longevity of Skylake.

In fact, the 11900K reportedly defaults to 1:1 at 3200; the lower SKUs are the ones that will reportedly run 1:1 only at 2933.

As for AT, they could have improved their RKL results simply by running 2933 at 1:1, which is fully supported on all the RKL SKUs.

It looks like 3200 at 2:1 is an artificial limit. Maybe Intel will get smart and remove it in the final BIOS updates, but we'll have to see when real reviews with final production BIOSes start to come out — and reviews where the motherboard and complete config are known. Anandtech's secret benchmark platforms and general lack of configuration information leave a bad taste.
 
Anandtech has an update that addresses the 1:1 question and a couple of others:

I doubt most realize it, but they basically explained how going from one beta BIOS to another beta BIOS resulted in this:

"In our CPU performance tests, we are seeing an average +1.8% performance gain across all workloads. This varies between a -4.3% loss in some workloads (Handbrake) up to a +9.7% gain (Compilation). SPEC2017 single thread saw a bigger than average +3.4% gain, however SPEC2017 multi thread saw a -2.1% adjustment, making it slower."

They also had a 10% decrease in max power consumption.

Those variations across workloads are huge. Performance is all over the board just from a BIOS revision. It basically means their original "review" was meaningless, and their revised numbers using a newer beta BIOS are just as meaningless.
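For reference, swings like those are just signed percentage changes between the two BIOS runs. A minimal sketch (the base scores here are made up purely to reproduce AT's quoted -4.3% and +9.7% endpoints):

```python
# Sketch of how per-workload BIOS-to-BIOS swings are computed.
# Scores are hypothetical, chosen only to match the quoted deltas.

def pct_change(old: float, new: float) -> float:
    """Signed percentage change from an old score to a new one."""
    return (new - old) / old * 100.0

old_bios = {"Handbrake": 100.0, "Compilation": 100.0}
new_bios = {"Handbrake": 95.7, "Compilation": 109.7}

for workload in old_bios:
    delta = pct_change(old_bios[workload], new_bios[workload])
    print(f"{workload}: {delta:+.1f}%")  # Handbrake: -4.3%, Compilation: +9.7%
```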

AT's approach to reviews is basically to use OEM settings on DIY hardware, their logic being that the majority of PC buyers buy OEM.

However, if you are looking at an OEM setup you should be looking at PCWorld / CNET and so on; they have comprehensive reviews, and the performance of an OEM system (surprise) has little to do with which brand of CPU you buy. It has everything to do with how the OEM set the rig up. Single-channel RAM? Soldered RAM? Nerfed / low-power GPU? Nerfed BIOS? Super-slow DDR4-2400 or 2133? OEMs do all of these things, and they affect OEM setups far more than any specific hardware selection.

In contrast to AT, and more relevant for DIY / enthusiasts, we've also got stuff like this, which gives some idea of what a typical enthusiast / DIY build can do. This is not OC; this is memory tweaked on both platforms with DDR4-3600 / 3800 CL14:

[screenshot: memory-tweaked benchmark comparison]
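As a rough illustration of why tuned CL14 kits matter: first-word CAS latency is just CL cycles divided by the data clock. This is a back-of-the-envelope metric only; measured latency also depends on the IMC/fabric clocks discussed earlier in the thread.

```python
# Back-of-the-envelope first-word latency for a DDR4 kit.

def cas_latency_ns(ddr_rate: int, cl: int) -> float:
    """CAS latency in nanoseconds: CL cycles at the DDR data clock."""
    data_clock_mhz = ddr_rate / 2  # DDR4-3600 -> 1800 MHz data clock
    return cl / data_clock_mhz * 1000.0

# Tuned enthusiast kit vs loose JEDEC-style timings:
print(round(cas_latency_ns(3600, 14), 2))  # 7.78 ns
print(round(cas_latency_ns(3200, 22), 2))  # 13.75 ns
```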
 
Joined
Oct 31, 2020
Messages
78 (0.06/day)
Processor 5800X3D
Motherboard ROG Strix X570-F Gaming
Cooling Arctic Liquid Freezer II 280
Memory G Skill F4-3800C14-8GTZN
Video Card(s) PowerColor RX 6900xt Red Devil
Storage Samsung SSD 970 EVO Plus 250GB [232 GB], Samsung SSD 970 EVO Plus 500GB
Display(s) Samsung C32HG7xQQ (DisplayPort)
Case Graphite Series™ 730T Full-Tower Case
Power Supply Corsair RM1000x
Mouse Basillisk X Hyperspeed
Keyboard Blackwidow Ultimate
Software Win 10 Home
wait for official reviews
 

Yeah not liking the preorder prices at all. At least not here at launch. Maybe a few weeks after launch these will come down.

Microcenter:

[screenshot: Microcenter pre-order pricing]
 
Pricing is showing much better now. 11600K @ $269.

That 10700K at $249 is very tempting though, as is the 10900KF at $329.



[screenshot: Microcenter pricing]
 