Monday, December 28th 2020

Intel Core i7-11700K "Rocket Lake" CPU Outperforms AMD Ryzen 9 5950X in Single-Core Tests

Intel's Rocket Lake-S platform is scheduled to arrive early next year, now just a few days away. The Rocket Lake lineup will be Intel's 11th generation of Core desktop CPUs, and the platform is expected to debut with Intel's newest Cypress Cove core design. Thanks to a Geekbench 5 submission, we now have fresh information about the performance of the upcoming Intel Core i7-11700K 8C/16T processor. Based on the Cypress Cove core, the CPU allegedly brings a double-digit IPC increase, according to Intel.

In the single-core test, the CPU managed to score 1807 points, while the multi-core score is 10673 points. The CPU ran at its base clock of 3.6 GHz, while the boost frequency is fixed at 5.0 GHz. Compared to the previous-generation Intel Core i7-10700K, which scores 1349 points in single-core and 8973 points in multi-core, the Rocket Lake CPU puts out a 34% higher single-core and 19% higher multi-core score. As for AMD's offerings, the highest-end Ryzen 9 5950X is about 7.5% slower in the single-core result, and of course much faster in multi-core thanks to having double the number of cores.
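The percentage figures above follow directly from the raw Geekbench 5 scores; here is a quick sketch of the arithmetic (scores taken from this article):

```python
# Geekbench 5 scores quoted in the article.
scores = {
    "i7-11700K": {"single": 1807, "multi": 10673},  # Rocket Lake
    "i7-10700K": {"single": 1349, "multi": 8973},   # previous generation
}

def gain_pct(new, old):
    """Percentage improvement of `new` over `old`."""
    return (new - old) / old * 100

single = gain_pct(scores["i7-11700K"]["single"], scores["i7-10700K"]["single"])
multi = gain_pct(scores["i7-11700K"]["multi"], scores["i7-10700K"]["multi"])
print(f"single-core gain: {single:.0f}%")  # prints: single-core gain: 34%
print(f"multi-core gain: {multi:.0f}%")    # prints: multi-core gain: 19%
```

The quoted ~7.5% single-core deficit for the Ryzen 9 5950X would put its score somewhere in the high 1600s in the same test, though the submission itself does not include that number.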
Sources: Leakbench, via VideoCardz

114 Comments on Intel Core i7-11700K "Rocket Lake" CPU Outperforms AMD Ryzen 9 5950X in Single-Core Tests

#26
piloponth
Seriously, nobody believes that ST perf. gained 34% between generations on the same 14++++++nm process.
Posted on Reply
#27
thesmokingman
piloponth
Seriously, nobody believes that ST perf. gained 34% between generations on the same 14++++++nm process.
Yea lol, more Intel math!
Posted on Reply
#28
Luminescent
jonup
You must be one of those people who ask me about the gas mileage on my cars. :roll: Wild guess: I don't care. :nutkick:
I recently built a 10850K; no OC, just power limits removed, 320 W+. I would have put just as beefy cooling on a Ryzen CPU if I wanted to get the best out of it, and that would be pulling well over its 125 W rated TDP.

P.S. The owner also didn't care about his electric bill. Surprise, surprise!
Over the many years I have owned computers, I've noticed they get hotter and hotter. I too don't care about power consumption, but now it's just ridiculous how hot the CPUs get and how big the video cards have gotten. When I switched from a 3570K and some GTX Nvidia card I don't remember to a Ryzen 1800X and RX 480, I was shocked how hot the ACTUAL CASE got; I have an old-school all-metal one with optical drive bays.
This is unacceptable: requirements of 600 W to 1000 W power supplies and water cooling, while Apple makes a chip (the M1) that consumes 5 to 20 W of power, passively cooled in some laptops, that can edit even 8K video files.
I am a video editor and I never thought this would be possible, but I might ditch Windows for Mac if the trend is water-cooling 200 W CPUs and paying close to $1000 for a video card (that's the price of a 3060 Ti in my country).
I don't understand how Apple made that chip so powerful while consuming so little power, and that 8K editing from the Canon R5 is mind-blowing; you would need a Threadripper and an RTX 2080 Ti to work with those files on a PC.
Posted on Reply
#29
yotano211
jonup
You must be one of those people who ask me about the gas mileage on my cars. :roll: Wild guess: I don't care. :nutkick:
I recently built a 10850K; no OC, just power limits removed, 320 W+. I would have put just as beefy cooling on a Ryzen CPU if I wanted to get the best out of it, and that would be pulling well over its 125 W rated TDP.

P.S. The owner also didn't care about his electric bill. Surprise, surprise!
I drive a tractor-trailer that gets 5-7 mpg, my car is a Prius that gets 49 mpg, and my boat is a sailboat on which I got close to 250 mpg of diesel last summer.
1st one is my job, 2nd personal car, 3rd is my boat.
Posted on Reply
#30
TumbleGeorge
Luminescent
I don't understand how Apple made that chip so powerful while consuming so little power, and that 8K editing from the Canon R5 is mind-blowing; you would need a Threadripper and an RTX 2080 Ti to work with those files on a PC.
The x86 architecture is too old. It should have been retired long ago and replaced with a more efficient architecture, but too much of the software ecosystem is based on it.
Posted on Reply
#31
Bones
Who needs the KFConsole when you've got one of these to roast your dinner tonight?

I'll pass thank you very much.
My wallet isn't into getting fleeced by Intel every chance they get, for a piece of silicon that, in the end, still does the same basic thing. I've owned both, and there is nothing about Intel that would make me want to go exclusive with them, or with AMD for that matter, aside from price versus what you get from each.
Posted on Reply
#32
dragontamer5788
piloponth
Seriously, nobody believes that ST perf. gained 34% in between generations on the same 14++++++nm process.
But AMD's Zen 3 gained a ton of ST performance over Zen 2 on the same 7 nm TSMC process.

Architectural advancements are certainly possible today.
Posted on Reply
#33
efikkan
TumbleGeorge
The x86 architecture is too old. It should have been retired long ago and replaced with a more efficient architecture, but too much of the software ecosystem is based on it.
x86 is an ISA, not an architecture.
All current microarchitectures supporting x86 translate it into micro-operations. There is still no faster generic ISA than x86.
Posted on Reply
#34
laszlo
As I mentioned several times, Intel catches up fast and won't let AMD be better... At least AMD managed to push Intel to offer more for less money, which is already good. Of course AMD will come up with something new, so in the end it will all be about prices and preferences...
Posted on Reply
#35
FinneousPJ
Nice I didn't expect them to hit 5 GHz again. I hope AMD can beat them next gen again and we get some great competition going.
Posted on Reply
#36
ZoneDymo
FinneousPJ
Nice I didn't expect them to hit 5 GHz again. I hope AMD can beat them next gen again and we get some great competition going.
Why not? It's literally all they seemingly can do... heck, they dropped two cores from the high end and made up for it with higher clock speeds...
Posted on Reply
#37
TheLostSwede
Luminescent
Over the many years I have owned computers, I've noticed they get hotter and hotter. I too don't care about power consumption, but now it's just ridiculous how hot the CPUs get and how big the video cards have gotten. When I switched from a 3570K and some GTX Nvidia card I don't remember to a Ryzen 1800X and RX 480, I was shocked how hot the ACTUAL CASE got; I have an old-school all-metal one with optical drive bays.
This is unacceptable: requirements of 600 W to 1000 W power supplies and water cooling, while Apple makes a chip (the M1) that consumes 5 to 20 W of power, passively cooled in some laptops, that can edit even 8K video files.
I am a video editor and I never thought this would be possible, but I might ditch Windows for Mac if the trend is water-cooling 200 W CPUs and paying close to $1000 for a video card (that's the price of a 3060 Ti in my country).
I don't understand how Apple made that chip so powerful while consuming so little power, and that 8K editing from the Canon R5 is mind-blowing; you would need a Threadripper and an RTX 2080 Ti to work with those files on a PC.
You clearly have a poor understanding of CPU design and software ecosystems, though. Apple can largely do what they're doing because they control every aspect of their systems. Their CPUs use custom-designed accelerators to enable the video editing features, and this requires OS awareness at a different level than a more open operating system can provide.
Yes, it's impressive what they've accomplished, but it's unlikely we'll ever see anything quite like it on any other OS, or at least not one that supports a much wider hardware ecosystem.
iOS has outperformed Android for years, despite technically inferior hardware, so nothing really new here.
However, Apple's new hardware isn't likely to perform well in a lot of tasks. Luckily for Apple, there are so far no means of testing this, as the platform is too new and there aren't enough benchmarks, or even enough software, out there to show it.
x86/x64 CPUs are actually quite inefficient at what they do, but they are also capable of doing things other processors can't, due to the way they were designed. There are tradeoffs depending on what your needs are, and this is something a lot of people don't seem to quite understand.
Let's see how well Apple's new SoCs handle new video file formats. An old-fashioned x64 chip will be able to work with them, albeit slowly, unless you have a very pricey, hot, multi-core chip, as you pointed out. I bet Apple will tell you to buy their new computer that supports the new file format, as that's how ARM-based hardware works: it needs dedicated co-processors that recognise the new file format to some degree to allow you to use it. This obviously doesn't apply to everything, but very much to video files.
TumbleGeorge
The x86 architecture is too old. It should have been retired long ago and replaced with a more efficient architecture, but too much of the software ecosystem is based on it.
Too old? I don't think you understand the difference here. Please see above.
Posted on Reply
#39
phanbuey
FinneousPJ
Nice I didn't expect them to hit 5 GHz again. I hope AMD can beat them next gen again and we get some great competition going.
Well... they hit 5 GHz on 14 nm++++++, which we know they can do quite easily. Let's see if Alder Lake hits 5 GHz on 10 nm.
Posted on Reply
#40
TumbleGeorge
efikkan
There is still no faster generic ISA than x86
Why? Where is the 21st-century scientist who will invent something really new and much better? Do we have to wait for a genius to be born and grow up, and hope for his ability to pull off some technological magic, a real revolution?
Posted on Reply
#41
Fierce Guppy
bluetriangleclock
There goes AMD's brief lead in gaming. :roll:

But it was never a real lead since the Ryzen 5000 launch was a paper launch.
Aww, geez... I was gonna get this one, but then I read your post and figured it must be some space age paper mache 5950X prop.
Posted on Reply
#42
Max(IT)
Mats
I have no doubts that the next big thing from Intel since 2015 can be this good, I just find it amusing that this could have been launched years ago, maybe instead of the 9900K. :D
It never happened, because Intel thought 10 nm was worth waiting for.. :slap:

Also, I don't trust g**kbench either.
tbh the “next big thing” should be Alder Lake, not Rocket Lake.
Fierce Guppy
Aww, geez... I was gonna get this one, but then I read your post and figured it must be some space age paper mache 5950X prop.
Let’s be honest: it almost was a paper launch, with skyrocketing prices for Zen 3.
I don’t know about New Zealand, but here in Europe it is very difficult to find one at a decent price.
Posted on Reply
#43
dragontamer5788
TumbleGeorge
Why? Where is the 21st-century scientist who will invent something really new and much better? Do we have to wait for a genius to be born and grow up, and hope for his ability to pull off some technological magic, a real revolution?
Why do you think something new and better requires a change in ISA? What if people who invent newer, and better, things don't need to change the ISA at all?

Case in point: AMD Zen 3, Intel Skylake, Intel Atom, PS4, PS5, XBox One X, and XBox Series X all share the same ISA. They all have different levels of performance: with the newer chips performing better and better. Even with the same ISA, CPU-engineers are able to make better CPUs.

-----

And before someone mentions decoder width... sure, I admit that might be a problem. But that's "might be a problem", not "proven to be a problem" yet. POWER9 pushed 6 uops per clock tick and still lost to Skylake's 4 uops per clock in typical code. Apple M1 is 8 uops per clock, which is what brings the decoder-width issue up again.
Posted on Reply
#44
theoneandonlymrk
TumbleGeorge
Why? Where is the 21st-century scientist who will invent something really new and much better? Do we have to wait for a genius to be born and grow up, and hope for his ability to pull off some technological magic, a real revolution?
I would easily call the last thirty years just that, with several thousand geniuses' worth of talent to get here. The fact that you have little respect for the complexities of architecture design for mass manufacturing, compliant with stringent standards and specifications to guarantee support for all your software, all while evolving to newer technology, is typical.
Posted on Reply
#45
efikkan
docnorth
That’s right, we should check leaks like this: wccftech.com/intel-core-i9-11900-i7-11700k-i7-11700-8-core-rocket-lake-desktop-cpus-leak/ , which includes CPU-Z, Cinebench R20 and R23.
Yet more benchmarks with little to no relevance for real workloads. :)
TumbleGeorge
Why? Where is 21st century scientist to invent something that is really new and much better? Do we have to wait for a genius to be born and grow up and hope for his ability to realize some technological magic, a real revolution?
That sounds like a solution in search of a problem, which is the way many engineers think.
Decades have passed, and nothing has yet proven to be more versatile and performant than x86, even though many have tried to replace it, like Itanium.
The same happens in the programming world too; there is still nothing that can match good old C, yet people try to overengineer replacements… (cough)Rust(ahem) :rolleyes:

Virtually all code will scale towards cache misses and branch predictions, unless it relies heavily on SIMD. No ISA that I'm aware of has been able to solve this so far.
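The cache-miss scaling point above can be illustrated with a toy model. This is a hypothetical sketch, not any real CPU: a direct-mapped cache with made-up sizes, showing that the same number of memory accesses can cost wildly different miss counts depending purely on the access pattern:

```python
# Toy direct-mapped cache model: same work, very different miss counts
# depending on access pattern. All sizes are illustrative, not real hardware.
LINE_SIZE = 64   # bytes per cache line
NUM_SETS = 512   # direct-mapped: one resident line per set

def count_misses(addresses):
    """Count cache misses for a sequence of byte addresses."""
    cache = {}   # set index -> tag of the line currently resident
    misses = 0
    for addr in addresses:
        line = addr // LINE_SIZE
        index, tag = line % NUM_SETS, line // NUM_SETS
        if cache.get(index) != tag:
            misses += 1
            cache[index] = tag   # evict whatever was there
    return misses

N = 1 << 16
sequential = [i * 8 for i in range(N)]              # walk 8-byte words in order
strided = [i * 8 * LINE_SIZE for i in range(N)]     # jump 8 lines every access

print(count_misses(sequential))  # one miss per line touched: 8192
print(count_misses(strided))     # every single access misses: 65536
```

An 8x difference in misses for the same access count, and no ISA change can remove it; only the memory access pattern (or prefetching that pattern) matters.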
dragontamer5788
And before someone mentions decoder width... sure, I admit that might be a problem. But that's "might be a problem", not "proven to be a problem" yet. POWER9 pushed 6 uops per clock tick and still lost to Skylake's 4 uops per clock in typical code. Apple M1 is 8 uops per clock, which is what brings the decoder-width issue up again.
I have no issue with the possibility of increasing the decoder width or even adding more execution ports. But I question how likely it is, if Cypress Cove is basically a backport of Sunny Cove, since these kinds of changes usually require a total overhaul of the cache, register files and everything on the front-end.

Wouldn't it be more likely to backport something from the execution side from e.g. Sapphire Rapids, or to simply add more execution units on existing execution ports? (like one extra MUL unit?)
Posted on Reply
#46
Xuper
Mark my words: Intel won't sell cheaper, and AMD will not drop prices. Both AMD and Intel want more money, just like with Nvidia/AMD GPUs.
Posted on Reply
#47
Lionheart
dicktracy
Insecure fanboys everywhere. As if this is surprising, since it’s no longer Skylake anymore.
So yourself then.
Posted on Reply
#48
R-T-B
Legacy-ZA
I wonder how many new security flaws this generation will have. :roll:
With any luck, far fewer. They'd be fools not to avoid known flaws in the redesign.
Mats
It tops out at 8C.. even if it's a threat to Ryzen, which I doubt, AMD could just drop the prices and call it a day.
8C is all I need honestly. Two questions remaining are efficiency and how well the SMT works. AMD has a big edge in both.
Posted on Reply
#49
AnarchoPrimitiv
Here comes the Ryzen 5000 XT series on 7nm EUV (improved node)...willing to bet on it
Posted on Reply
#50
dragontamer5788
efikkan
I have no issue with the possibility of increasing the decoder width or even adding more execution ports. But I question how likely it is, if Cypress Cove is basically a backport of Sunny Cove, since these kinds of changes usually require a total overhaul of the cache, register files and everything on the front-end.

Wouldn't it be more likely to backport something from the execution side from e.g. Sapphire Rapids, or to simply add more execution units on existing execution ports? (like one extra MUL unit?)
I think ARM has an advantage on decoder width. That's the only weak point of the x86 ISA I can think of.

x86 requires a byte-by-byte decoder, because you have 2-byte, 3-byte, 4-byte... up to 15-byte instructions (some of which are macro-op fused and/or micro-op split). ARM standardized on 4-byte instructions, with an occasional 8-byte macro-op fusion. That means if you want to decode a full 64-byte cache line in parallel, you need 64 parallel decoders: one for every possible starting byte (byte 0, byte 1, byte 2, ...) of the cache line.

ARM, on the other hand, is always 4 bytes (or 8 bytes at a time in the case of macro-op fused operations). Which means for the same 64-byte line, ARM only needs 16 parallel decoders, knowing there's no 2-byte or 3-byte instruction that could be "in between". Just hypothetically speaking, of course; I dunno really how these things are organized.

Anyway: Apple M1 shot a broadside at the x86 camp with their 8-wide decoder. I do think it's relevant to bring up. However, ARM Neoverse is still only 4-wide decoding. It hasn't really been proven yet that an ultra-wide decoder (like Apple's M1) is the best path forward.
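The 64-vs-16 decoder counts in the post come straight from counting candidate instruction-start offsets in a cache line. A minimal sketch of that counting argument (purely illustrative; real front-ends use predecode marker bits and other tricks to prune candidates):

```python
def candidate_starts(line_bytes, fixed_width=None):
    """Byte offsets a wide front-end must treat as possible instruction
    starts within one cache line."""
    if fixed_width is not None:
        # Fixed-width ISA (ARM-style): instructions only begin on
        # aligned multiples of the instruction width.
        return list(range(0, line_bytes, fixed_width))
    # Variable-width ISA (x86-style, 1-15 byte instructions): a boundary
    # can fall on any byte, so every offset is a candidate.
    return list(range(line_bytes))

CACHE_LINE = 64  # bytes

print(len(candidate_starts(CACHE_LINE)))     # x86-style: 64 candidate starts
print(len(candidate_starts(CACHE_LINE, 4)))  # ARM-style: 16 candidate starts
```

This is why fixed-width encodings make very wide decode (like M1's 8-wide) comparatively cheap: the hardware never has to guess where an instruction begins.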
Posted on Reply