
Intel Core i7-11700K "Rocket Lake" CPU Outperforms AMD Ryzen 9 5950X in Single-Core Tests

The big jump is indeed in the AES-XTS test, which does benefit from AVX-512 acceleration. The 10700K scores around 3.07 GB/s in the single-core test; this 11700K around 9.24 GB/s. There is very little difference in the multi-core AES-XTS scores, which means AVX-512 reduces the multi-core clock quite significantly, to the point where it is hardly worth it.

That one sub-test on its own lifts the single-core score by 10 percentage points, so the final score is fairly misleading. Rocket Lake is an improvement, but not a 30% improvement.
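To see how one sub-test can move the headline number, here's a back-of-the-envelope sketch. It assumes a Geekbench-5-style weighted geometric mean; the section weights (crypto 5%, integer 65%, FP 30%) are my assumption, not numbers from this thread.

```python
# Toy model of how one accelerated sub-test lifts a composite score.
# The weights below are an assumption (Geekbench-5-style sections:
# crypto 5%, integer 65%, floating point 30%), not measured values.
def composite(crypto, integer, fp):
    # weighted geometric mean of the three section scores
    return crypto ** 0.05 * integer ** 0.65 * fp ** 0.30

base = composite(100, 100, 100)     # baseline: every section scores 100
boosted = composite(300, 100, 100)  # only crypto runs 3x faster (AVX-512 AES)
print(round(boosted / base - 1, 3))  # fractional uplift from crypto alone -> 0.056
```

Under these assumed weights, a 3x crypto speed-up lifts the composite by only ~5.6%, which is the sense in which one accelerated sub-test can distort the final score.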

Zen3 also benefits from accelerated AES-XTS though, albeit with the 256-bit version instead of the 512-bit version. In any case, the various CPU manufacturers are pushing faster and faster AES every generation. It's clearly an important workload if this much effort is being put into it. I don't know if it's more for servers or for clients, but both sides of an HTTPS connection need AES on every single connection. More efficient AES instructions mean more efficient compute, since it's certainly a CPU-heavy load.

Ultimately, that's why it is important to understand these benchmarks. Everyone has an opinion on what is, or isn't, a "standard computer workload" these days. Knowing whether to emphasize something like AES performance, or deep-learning instructions, or 512-bit vectors... or 128 kB of L1 cache (M1), or 512 kB of L2 cache (AMD Zen3 / Intel Cypress Cove)... we can all invent a benchmark to make our favorite CPU win every time. The question is: what is the "standard" workload that we all agree is representative of reality?
 
Seriously, nobody believes that ST perf. gained 34% between generations on the same 14++++++nm process.

Yea lol, more Intel math!
 
You must be one of those people who ask me about the gas mileage on my cars. :roll: Wild guess: I don't care. :nutkick:
I recently built a 10850K, no OC, just power limits removed, 320 W+. I would have put just as beefy cooling on a Ryzen CPU if I wanted to get the best out of it, and that would be pulling well over its 125 W rated TDP.

P.S. The owner also didn't care about his electric bill. Surprise, surprise!
Over the many years I have owned computers, I've noticed they get hotter and hotter. I too don't care about power consumption, but now it's just ridiculous how hot the CPUs get and how big the video cards have gotten. When I switched from a 3570K and some GTX Nvidia card I don't remember, to a Ryzen 1800X and RX 480, I was shocked at how hot the ACTUAL CASE got. I have an old-school all-metal one with optical drive bays.
This is unacceptable: requirements of 600 W to 1000 W power supplies and water cooling, while Apple makes a chip (the Apple M1) that consumes 5 to 20 W of power, passively cooled in some laptops, that can edit even 8K video files.
I am a video editor and I never thought this would be possible, but I might ditch Windows for Mac if the trend is water-cooling 200 W CPUs and paying close to $1000 for a video card (this is the price for a 3060 Ti in my country).
I don't understand how Apple made that chip so powerful while consuming so little power, and that 8K editing of Canon R5 footage is mind-blowing; you would need a Threadripper and an RTX 2080 Ti to work with those files on a PC.
 
You must be one of those people who ask me about the gas mileage on my cars. :roll: Wild guess: I don't care. :nutkick:
I recently built a 10850K, no OC, just power limits removed, 320 W+. I would have put just as beefy cooling on a Ryzen CPU if I wanted to get the best out of it, and that would be pulling well over its 125 W rated TDP.

P.S. The owner also didn't care about his electric bill. Surprise, surprise!
I drive a tractor-trailer that gets 5-7 mpg, my car is a Prius that gets 49 mpg, and my sailboat got close to 250 mpg on diesel last summer.
The first is my job, the second my personal car, the third my boat.
 
I don't understand how Apple made that chip so powerful while consuming so little power, and that 8K editing of Canon R5 footage is mind-blowing; you would need a Threadripper and an RTX 2080 Ti to work with those files on a PC.
The x86 architecture is too old. It should have been retired long ago and replaced with a more efficient architecture, but too much of the software ecosystem is based on it.
 
Who needs the KFConsole when you've got one of these to roast your dinner tonight?

I'll pass thank you very much.
My wallet isn't into getting gouged by Intel at every chance they get, for a part that, in the end, still does the same basic thing. I've owned both, and there is nothing about Intel that would make me want to go exclusive with them, or with AMD for that matter, aside from price versus what you get from each maker.
 
Seriously, nobody believes that ST perf. gained 34% between generations on the same 14++++++nm process.

But AMD Zen3 gained a ton of ST performance on the same 7nm process from TSMC over Zen2.

Architectural advancements are certainly possible today.
 
The x86 architecture is too old. It should have been retired long ago and replaced with a more efficient architecture, but too much of the software ecosystem is based on it.
x86 is an ISA, not an architecture.
All current microarchitectures supporting x86 translate it into micro-operations. There is still no faster generic ISA than x86.
 
As I've mentioned several times, Intel catches up fast and won't let AMD stay ahead. At least AMD managed to push Intel to offer more for less money, which is already good. Of course, AMD will come up with something new, so in the end it will all be about prices and preferences...
 
Nice I didn't expect them to hit 5 GHz again. I hope AMD can beat them next gen again and we get some great competition going.
 
Nice I didn't expect them to hit 5 GHz again. I hope AMD can beat them next gen again and we get some great competition going.

Why not? It's literally all they seemingly can do... heck, they dropped 2 cores from the high end and make up for it with high clock speeds...
 
Over the many years I have owned computers, I've noticed they get hotter and hotter. I too don't care about power consumption, but now it's just ridiculous how hot the CPUs get and how big the video cards have gotten. When I switched from a 3570K and some GTX Nvidia card I don't remember, to a Ryzen 1800X and RX 480, I was shocked at how hot the ACTUAL CASE got. I have an old-school all-metal one with optical drive bays.
This is unacceptable: requirements of 600 W to 1000 W power supplies and water cooling, while Apple makes a chip (the Apple M1) that consumes 5 to 20 W of power, passively cooled in some laptops, that can edit even 8K video files.
I am a video editor and I never thought this would be possible, but I might ditch Windows for Mac if the trend is water-cooling 200 W CPUs and paying close to $1000 for a video card (this is the price for a 3060 Ti in my country).
I don't understand how Apple made that chip so powerful while consuming so little power, and that 8K editing of Canon R5 footage is mind-blowing; you would need a Threadripper and an RTX 2080 Ti to work with those files on a PC.
You clearly have a poor understanding of CPU design and software ecosystems, though. Apple can largely do what they're doing because they control every aspect of their systems. Their CPUs use custom-designed accelerators to enable the video editing features, and this requires OS awareness at a different level than a more open operating system can provide.
Yes, it's impressive what they've accomplished, but it's unlikely we'll ever see anything quite like it on any other OS, or at least not one that supports a much wider hardware ecosystem.
iOS has outperformed Android for years, despite technically inferior hardware, so nothing really new here.
However, Apple's new hardware isn't likely to perform well in a lot of tasks. Luckily for Apple, there are so far no means of testing this, as the platform is too new and there aren't enough benchmarks, or even enough software, out there to show it.
x86/x64 CPUs are actually quite inefficient at what they do, but they are also capable of doing things other processors can't, due to the way they were designed. There are tradeoffs depending on what your needs are, and this is something a lot of people don't seem to quite understand.
Let's see how well Apple's new SoCs handle new video file formats. An old-fashioned x64 chip will be able to work with them, albeit slowly, unless you have a very pricey, hot, multi-core chip, as you pointed out. I bet Apple will tell you to buy their new computer that supports the new file format, as that's how ARM-based hardware works: it needs dedicated co-processors that recognise the new file format to some degree to let you use it. This obviously doesn't apply to everything, but very much to video files.

The x86 architecture is too old. It should have been retired long ago and replaced with a more efficient architecture, but too much of the software ecosystem is based on it.
Too old? I don't think you understand the difference here. Please see above.
 
Nice I didn't expect them to hit 5 GHz again. I hope AMD can beat them next gen again and we get some great competition going.

Well... they hit 5 GHz on 14nm++++++, which we know they can do quite easily. Let's see if Alder Lake hits 5 GHz on 10 nm.
 
There is still no faster generic ISA than x86
Why? Where is the 21st-century scientist who will invent something really new and much better? Do we have to wait for a genius to be born and grow up, and hope for his ability to realize some technological magic, a real revolution?
 
I have no doubt that the next big thing from Intel since 2015 can be this good; I just find it amusing that this could have been launched years ago, maybe instead of the 9900K. :D
It never happened because Intel thought 10 nm was worth waiting for... :slap:

Also, I don't trust g**kbench either.
tbh the “next big thing” should be Alder Lake, not Rocket Lake.

Aww, geez... I was gonna get this one, but then I read your post and figured it must be some space age paper mache 5950X prop.
Let's be honest: it almost was a paper launch, with skyrocketing prices for Zen 3.
I don't know about New Zealand, but here in Europe it is very difficult to find one at a decent price.
 
Why? Where is the 21st-century scientist who will invent something really new and much better? Do we have to wait for a genius to be born and grow up, and hope for his ability to realize some technological magic, a real revolution?

Why do you think something new and better requires a change in ISA? What if the people who invent newer and better things don't need to change the ISA at all?

Case in point: AMD Zen 3, Intel Skylake, Intel Atom, PS4, PS5, Xbox One X, and Xbox Series X all share the same ISA, yet they all have different levels of performance, with the newer chips performing better and better. Even with the same ISA, CPU engineers are able to make better CPUs.

-----

And before someone mentions decoder width... sure, I admit that might be a problem. But that's "might be a problem", not "proven to be a problem" yet. POWER9 pushed 6 uops/clock tick and still lost to Skylake's 4 uops/clock tick in typical code. Apple M1 is 8 uops/clock tick, so that's what starts to bring up the decoder width issue again.
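For what it's worth, here's a toy throughput model of why decode width alone stops paying off once stall cycles dominate. Every number in it is an illustrative assumption, not a measurement of any real chip:

```python
# Toy IPC model: total cycles = decode-limited cycles + stall cycles.
# The stall term (cache misses, branch mispredicts) doesn't shrink
# when the decoder gets wider, so the payoff flattens out.
def achieved_ipc(decode_width, stall_cycles_per_100_insts):
    cycles = 100 / decode_width + stall_cycles_per_100_insts
    return 100 / cycles

for width in (4, 6, 8):  # Skylake-ish, POWER9-ish, M1-ish decode widths
    print(width, round(achieved_ipc(width, 20), 2))
# prints 4 2.22 / 6 2.73 / 8 3.08 with the assumed 20 stall cycles per 100 insts
```

Doubling the decoder from 4-wide to 8-wide only gets you ~1.4x in this sketch, because the stall term doesn't care how wide the front-end is.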
 
Why? Where is the 21st-century scientist who will invent something really new and much better? Do we have to wait for a genius to be born and grow up, and hope for his ability to realize some technological magic, a real revolution?
I would easily call the last thirty years just that, with several thousand geniuses' worth of talent to get here. The fact that you have so little respect for the complexities of architecture design for mass manufacturing, compliant with stringent standards and specifications to guarantee support for all your software, all while evolving toward newer technology, is typical.
 
That's right, we should check leaks like this https://wccftech.com/intel-core-i9-11900-i7-11700k-i7-11700-8-core-rocket-lake-desktop-cpus-leak/ , which include CPU-Z, Cinebench R20 and R23 results.
Yet more benchmarks with little to no relevance for real workloads. :)

Why? Where is the 21st-century scientist who will invent something really new and much better? Do we have to wait for a genius to be born and grow up, and hope for his ability to realize some technological magic, a real revolution?
That sounds like a solution in search of a problem, which is the way many engineers think.
Decades have passed, and nothing yet has proven to be more versatile and performant than x86, even though many have tried to replace it, like Itanium.
The same happens in the programming world too; there is still nothing that can match good old C, yet people try to overengineer replacements... (cough)Rust(ahem) :rolleyes:

Virtually all code will scale with cache misses and branch prediction, unless it relies heavily on SIMD. No ISA that I'm aware of has been able to solve this so far.

And before someone mentions decoder width... sure, I admit that might be a problem. But that's "might be a problem", not "proven to be a problem" yet. POWER9 pushed 6 uops/clock tick and still lost to Skylake's 4 uops/clock tick in typical code. Apple M1 is 8 uops/clock tick, so that's what starts to bring up the decoder width issue again.
I have no issue with the possibility of increasing the decoder width, or even adding more execution ports. But I question how likely it is if Cypress Cove is basically a backport of Sunny Cove, since these kinds of changes usually require a total overhaul of the cache, register files, and everything on the front end.

Wouldn't it be more likely to backport something from the execution side of, e.g., Sapphire Rapids, or to simply add more execution units on existing execution ports (like one extra MUL unit)?
 
Mark my words: Intel won't sell cheaper, and AMD will not drop prices. Both AMD and Intel want more money, just like Nvidia and AMD with GPUs.
 
I wonder how many new security flaws this generation will have. :roll:

With any luck, far fewer. They'd be fools not to avoid the known flaws in the redesign.

It tops out at 8C.. even if it's a threat to Ryzen, which I doubt, AMD could just drop the prices and call it a day.

8C is all I need honestly. Two questions remaining are efficiency and how well the SMT works. AMD has a big edge in both.
 