Monday, December 28th 2020

Intel Core i7-11700K "Rocket Lake" CPU Outperforms AMD Ryzen 9 5950X in Single-Core Tests

Intel's Rocket Lake-S platform is scheduled to arrive at the beginning of next year, which is just a few days away. The Rocket Lake lineup of processors will be Intel's 11th generation of Core desktop CPUs, and the platform is expected to debut with Intel's newest Cypress Cove core design. Thanks to a Geekbench 5 submission, we have the latest information about the performance of the upcoming Intel Core i7-11700K 8C/16T processor. Based on the Cypress Cove core, the CPU allegedly brings a double-digit IPC increase, according to Intel.

In the single-core test, the CPU managed to score 1807 points, while the multi-core score is 10673 points. The CPU ran at a base clock of 3.6 GHz, while the boost frequency is fixed at 5.0 GHz. Compared to the previous-generation Intel Core i7-10700K, which scores 1349 points in single-core and 8973 points in multi-core, the Rocket Lake CPU puts out a 34% higher single-core and a 19% higher multi-core score. As for AMD's offerings, the highest-end Ryzen 9 5950X is about 7.5% slower in the single-core result, and of course much faster in multi-core thanks to double the number of cores.
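The percentage figures above can be checked directly from the cited Geekbench 5 scores (a quick sanity check on the arithmetic, nothing more):

```python
# Scores quoted in the article (Geekbench 5 submission).
i7_11700k = {"single": 1807, "multi": 10673}
i7_10700k = {"single": 1349, "multi": 8973}

single_gain = (i7_11700k["single"] / i7_10700k["single"] - 1) * 100
multi_gain = (i7_11700k["multi"] / i7_10700k["multi"] - 1) * 100

print(f"Single-core gain: {single_gain:.0f}%")  # ~34%
print(f"Multi-core gain: {multi_gain:.0f}%")    # ~19%
```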
Sources: Leakbench, via VideoCardz

114 Comments on Intel Core i7-11700K "Rocket Lake" CPU Outperforms AMD Ryzen 9 5950X in Single-Core Tests

#76
xenocide
Crackong
I thought Intel uses "Realworld benchmark"

Is Geekbench a "Realworld benchmark" now ? :)
People went nuts over the M1 benchmarks on Geekbench for literally 3 months. Only seems fair.
#77
TumbleGeorge
Steevo
X86-64 has the advantage that if you want to do X in future software, brute force will do it; with ARM designs..... you need to buy a whole new device.
Brute force vs. intelligence. X86 has no brain, Apple has a brain. Does something exist midway between X86 and Apple? Intelligent force?
#78
1d10t
Good for Intel, I'm just curious how they price these chips :D
#79
TumbleGeorge
Are there rumors about Zen 4 for AM5 being delayed to Q3 2022? Why? To launch together with RDNA3 cards? If that is true, Intel has a chance to pit Meteor Lake against the Zen 4 series. I hope for a battle of many big cores vs. many big cores!
#80
PanicLake
efikkan
x86 is an ISA, not an architecture.
I'm wondering what the A in ISA stands for... :roll:
#81
zlobby
Legacy-ZA
I wonder how many new security flaws this generation will have. :roll:
All of them! :D
Max(IT)
yep... I was speaking about not being in Europe in a commercial way, not geographically :D
The Colonies are rowdy these days, eh mate?
#82
Mats
TumbleGeorge
To start together with RDNA3 cards?
Why?
I don't believe that for a second. Look what happened when RDNA2, Vermeer, XBOX and PS all launched within a few months, all coming from TSMC's production.
You think AMD wants to repeat that?

Some illiterate people even think those launches were paper launches, which is just hilarious and sad at the same time. It doesn't work like that. ;)

2022 sounds right tho, and maybe even Q3, although I'd guess a bit earlier in the year.
#83
Jism
Yes, and it probably takes 255 W of power to accomplish beating AMD, while AMD on the other hand doesn't go beyond 144 W with PBO enabled, and probably has a lot more cores too.
#84
efikkan
dragontamer5788
I think ARM has an advantage on decoder width. That's the only weak point of the x86 ISA I can think of.
x86 requires a byte-by-byte decoder, because you have 2-byte, 3-byte, 4-byte... 15-byte instructions (some of which are macro-op fused and/or micro-op split). ARM standardized upon 4-byte instructions with an occasional 8-byte macro-op fused…
Yes, it's certainly an advantage for implementing ARM, but I don't think it's such a big deal. Even with the variable instruction word width, an x86 implementation still doesn't need that many logic gates to determine the instruction length, and it can then pipeline even more than four decoders if needed.

I really don't think decoding width is the bottleneck for x86, at least not now. But stay tuned for Sapphire Rapids (Golden Cove), where at least the front-end is "significantly larger" than Sunny Cove, including a 600-instruction OoO window. I expect this may be to feed more execution ports, but time will tell.
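The length dependency being debated here can be sketched in a few lines. This is purely a toy model (the instruction lengths are made up, and real decoders speculate on boundaries in parallel), but it shows why fixed-width decode parallelizes trivially while variable-width decode has a serial chain:

```python
# Toy illustration: fixed-width vs. variable-width instruction boundaries.
# Not a real decoder; lengths below are arbitrary illustrative values.

def fixed_width_boundaries(stream_len, width=4):
    # Every boundary is known immediately: start offsets are k * width,
    # so N decoders can all start work in the same cycle.
    return list(range(0, stream_len, width))

def variable_width_boundaries(lengths):
    # Each start offset depends on the sum of all previous lengths,
    # so a naive decoder must walk the stream instruction by instruction.
    offsets, pos = [], 0
    for n in lengths:
        offsets.append(pos)
        pos += n
    return offsets

print(fixed_width_boundaries(16))                # [0, 4, 8, 12]
print(variable_width_boundaries([2, 3, 15, 1]))  # [0, 2, 5, 20]
```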
dragontamer5788
Apple has a superior decoder: just 8-uops/tick no matter what. Its the "more expensive transistor budget" compared to a uop cache. Apple can achieve 8uops/tick across the entire 192kB L1 instruction cache, while Intel Skylake / AMD Zen3 can only achieve 4-uops/tick across a 48kB L1 cache (Skylake) / 32kB L1 (Zen3) cache, and a 6-uop/tick across a smaller region inside of the uOp cache.
A little side note; don't forget ARM requires more instructions to do the same work, so it's not an apples to apples comparison. (no pun intended)
TumbleGeorge
LoL, I see that x86 would be OK with a better decoder. But isn't it impossible to make a better decoder, because of dependencies in how the ISA encodes information? That would be the same as saying the x86 ISA itself is not OK.
I think you got this all wrong, a bigger decoder doesn't require any ISA changes.
Just wait for Sapphire Rapids, and you'll see a much more sophisticated x86 based microarchitecture.
#85
danwat1234
Not a fair comparison! Intel's Rocket Lake instruction set has AVX-512, Zen 3 does not. Geekbench 5 takes advantage of this.
#86
thesmokingman
danwat1234
Not a fair comparison! Intel's Rocket Lake instruction set has AVX-512, Zen 3 does not. Geekbench 5 takes advantage of this.
Hey now, let's not look at things accurately!

lol, looks like they are doing another bone headed buyback.

lmao, all those buys and the stock dropped down again.
#87
docnorth
efikkan
Yet more benchmarks with little to no relevance for real workloads. :)
1) That was a reply with a scent of sarcasm. No AMD fan can reject Cinebench. 2) Almost every CPU benchmark is useless when isolated. Many benchmarks combined with browsing speed, office work, gaming, etc. can show the whole potential of a CPU. That's why TPU is my first choice for reviews.
#88
Mats
docnorth
Almost every CPU benchmark is useless when isolated.
Geekbench is way worse than a lot of individual benchmarks. CB does at least tell us something about rendering performance (even in programs other than C4D).
Geekbench tells us nothing, it's inconsistent with a lot of other benchmarks, and the choice of OS makes way too much difference.

Still, every CPU leak is Geekbench, and still I don't know why. Well unless it's all about accessibility.
#89
dragontamer5788
Mats
Geekbench is way worse than a lot of individual benchmarks. CB does at least tell us something about rendering performance (even in programs other than C4D).
Geekbench tells us nothing, it's inconsistent with a lot of other benchmarks, and the choice of OS makes way too much difference.

Still, every CPU leak is Geekbench, and still I don't know why. Well unless it's all about accessibility.
Geekbench is basically aimed at web-like workloads: DOM traversal (aka HTML), AES encryption/decryption (aka HTTPS performance), Javascript parsing, etc. It's pretty well documented, actually...
#90
Mats
dragontamer5788
Geekbench is basically aimed at web-like workloads. DOM traversal (aka HTML), AES-encryption / decryption (aka: HTTPS performance), Javascript parsing, etc. etc. Its pretty well documented actually...
Sure, but is that what HW enthusiasts are most interested in when it comes to leaked info about upcoming CPUs? Before rendering or gaming performance, for instance?
#91
dragontamer5788
Mats
Sure, but is that what HW enthusiasts are most interested in when it comes to leaked info about upcoming CPUs? Before rendering or gaming performance, for instance?
That their pet CPU performs better than others and that they should get into flamewars with everyone who disagrees with them?

I mean, if you're going to pose a question, you aren't necessarily going to get an answer you like :-)
#92
Mats
dragontamer5788
That their pet CPU performs better than others and that they should get into flamewars with everyone who disagrees with them?

I mean, if you're going to pose a question, you aren't necessarily going to get an answer you like :)
Am I supposed to stitch together a conspiracy theory here, about how various HW sites pay Geekbench to make a benchmark that's as useless for f**boys as possible, in order to calm down forums? :roll:

AMD users may like CBench, while Intel users may prefer gaming benchmarks, but my question is, who does Geekbench cater to among HW enthusiasts?
Do we really find a substantial amount of those people in HW forums, for instance? I don't think so.
#93
dragontamer5788
Mats
AMD users may like CBench, while Intel users may prefer gaming benchmarks, but my question is, who does Geekbench cater to among HW enthusiasts?
Do we really find a substantial amount of those people in HW forums, for instance? I don't think so.
Geekbench caters towards web users. Which happens to be everyone reading this forum.

Each click you make on this website kicks off HTTPS AES Decryption, followed by HTML + Javascript Parsing. Every post you have made is inside of a Javascript WYSIWYG GUI, parsed into an HTML form, packaged into an HTTPS Encrypted message and piped to the server. The very stuff that composes the Geekbench suite.

I said that before, but maybe if I say it again with more explicit examples, you'll get what I'm trying to say.
#94
Makaveli
Mats
Am I supposed to stitch together a conspiracy theory here, about how various HW sites pay Geekbench to make a benchmark that's as useless for f**boys as possible, in order to calm down forums? :roll:

AMD users may like CBench, while Intel users may prefer gaming benchmarks, but my question is, who does Geekbench cater to among HW enthusiasts?
Do we really find a substantial amount of those people in HW forums, for instance? I don't think so.
In the grand scheme of things, Geekbench scores mean nothing.

It's the only number they seem to have. Do what I do: ignore them and wait for a production review, which won't be until the March 2021 time frame.
#95
Vya Domus
dragontamer5788
Geekbench caters towards web users. Which happens to be everyone reading this forum.

Each click you make on this website kicks off HTTPS AES Decryption, followed by HTML + Javascript Parsing. Every post you have made is inside of a Javascript WYSIWYG GUI, parsed into an HTML form, packaged into an HTTPS Encrypted message and piped to the server. The very stuff that composes the Geekbench suite.
The question is: is 6 Gb/s worth of AES encryption performance going to matter as opposed to just 5 Gb/s?

I don’t think so.
Mats
Am I supposed to stitch together a conspiracy theory here, about how various HW sites pay Geekbench to make a benchmark that's as useless for f**boys as possible, in order to calm down forums? :roll:
Nah, what happened is GB was always used in the context of iOS vs Android, because the benchmark would always get updated to make Apple chips look better whenever a new one was released. Then people noticed that those absurd numbers were becoming comparable with desktop chips, so now it's used everywhere.
#96
Mats
dragontamer5788
Geekbench caters towards web users. Which happens to be everyone reading this forum.

Each click you make on this website kicks off HTTPS AES Decryption, followed by HTML + Javascript Parsing. Every post you have made is inside of a Javascript WYSIWYG GUI, parsed into an HTML form, packaged into an HTTPS Encrypted message and piped to the server. The very stuff that composes the Geekbench suite.
That goes without saying, but what does that have to do with measuring performance for those who don't care about that, i.e. most people here, presumably?

All over the net you'll find "Let's build a render box!" or "Ultimate ITX gaming" kinds of guides, as a consequence of lots of people using Blender, etc., or playing AAA games.
How many build a "fastest HTTPS AES decryption rig"? I'm not saying they don't exist, just that they're a small minority, most likely.

Back to square one: if GBench shows web performance, and most people want to know about other kinds of performance, how come GB is still the first one to pop up in leaks?

It's a mismatch, to say the least.
That's without touching the aspect of benchmark quality. There are more suitable benchmarks, but they never show up in leaks.
Vya Domus
The question is: is 6 Gb/s worth of AES encryption performance going to matter as opposed to just 5 Gb/s?

I don’t think so.
Exactly. It doesn't define the next build, far from it. (Admittedly, no individual benchmark does, but you get my point.)
Vya Domus
Nah, what happened is GB was always used in the context of iOS vs Android, because the benchmark would always get updated to make Apple chips look better whenever a new one was released. Then people noticed that those absurd numbers were becoming comparable with desktop chips, so now it's used everywhere.
Yup. Just look at Ryzen 5000 Hackintosh machines, way ahead of their W10 counterparts. It's supposed to be OS agnostic tho.. :D
#97
FinneousPJ
Mats
That goes without saying, but what does that have to do with measuring performance for those who don't care about that, i.e. most people here, presumably?

All over the net you'll find "Let's build a render box!" or "Ultimate ITX gaming" kinds of guides, as a consequence of lots of people using Blender, etc., or playing AAA games.
How many build a "fastest HTTPS AES decryption rig"? I'm not saying they don't exist, just that they're a small minority, most likely.

Back to square one: if GBench shows web performance, and most people want to know about other kinds of performance, how come GB is still the first one to pop up in leaks?

It's a mismatch, to say the least.
That's without touching the aspect of benchmark quality. There are more suitable benchmarks, but they never show up in leaks.


Exactly. It doesn't define the next build, far from it. (Admittedly, no individual benchmark does, but you get my point.)


Yup. Just look at Ryzen 5000 Hackintosh machines, way ahead of their W10 counterparts. It's supposed to be OS agnostic tho.. :D
AES en/decryption is very relevant, not on the client side (mainstream/desktop) but on the server side. So I would agree it's not a relevant benchmark for these mainstream parts.
#98
dragontamer5788
Mats
That goes without saying, but what does that have to do with measuring performance for those who don't care about that, i.e. most people here, presumably?

All over the net you'll find "Let's build a render box!" or "Ultimate ITX gaming" kinds of guides, as a consequence of lots of people using Blender, etc., or playing AAA games.
How many build a "fastest HTTPS AES decryption rig"? I'm not saying they don't exist, just that they're a small minority, most likely.

Back to square one: if GBench shows web performance, and most people want to know about other kinds of performance, how come GB is still the first one to pop up in leaks?

It's a mismatch, to say the least.
That's without touching the aspect of benchmark quality. There are more suitable benchmarks, but they never show up in leaks.
Geekbench 5 assigns 5% of the weight to the Cryptography score (the AES benchmark), which seems reasonable to me. Do you think it should be greater, or less than 5%?

The integer workloads are 65% of Geekbench, consisting of compression, HTML5, PDF rendering, and other such common tasks.

Clang is probably not so common, but it's probably representative of Javascript. SQLite is in a bunch of random stuff, so it's probably a good benchmark today.

---------

Raytracing, machine learning, etc. are vectorized tasks, taking the remaining 30% of the weight. The vectorized / floating-point tasks are clearly aimed at the scientific community, I guess the "hardcore" benchmarks you're interested in. But I would argue that those tasks are quite uncommon for a typical computer user today.
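For concreteness, a weighting scheme like this can be sketched as a weighted geometric mean of the section scores. This is one common way to aggregate weighted subscores (Geekbench's exact aggregation may differ), and the section scores below are made-up illustrative numbers, not real results:

```python
import math

# Weighted geometric mean of section scores, using the weights
# discussed above: 5% crypto, 65% integer, 30% floating point.
weights = {"crypto": 0.05, "integer": 0.65, "fp": 0.30}
scores = {"crypto": 2500, "integer": 1700, "fp": 1900}  # hypothetical

overall = math.exp(sum(w * math.log(scores[k]) for k, w in weights.items()))
print(round(overall))  # sits close to the integer score, which dominates
```

Note how even a big crypto score barely moves the composite: at 5% weight, doubling the crypto subscore lifts the overall by only about 3.5% (2^0.05).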
#99
RandallFlagg
No one said anything about AES when Zen 3 was far faster than Comet Lake. Seems like a tremendous level of hypocrisy.

Look at your URL bar, every place it says https and not http (with attendant insecure site browser warnings), you are using AES. Encrypted files, filesystems, and so on use AES. 5% weight seems like a reasonable number for a client system.

Ivy Bridge and Haswell made huge improvements to AES. Haswell was about 4x faster at AES than Sandy Bridge and about 60% faster than Ivy. The difference in that case is palpable while using a browser on modern AES-encrypted websites, especially with multiple encrypted tabs/connections. I will admit there is not as much of a felt difference after that; beyond a point a user doesn't notice, but AES is most definitely a thing that affects the user experience, and you can feel it if you swap between SB and Haswell boxes.
#100
dragontamer5788
RandallFlagg
No one said anything about AES when Zen 3 was far faster than Comet Lake. Seems like a tremendous level of hypocrisy.
Not only that, but AMD pushed Zen3 into AES-XTS first.

AMD Zen 1 started the trend, pushing 2 AES pipelines per clock tick, doubling its AES performance over Intel. Intel pushed back with their own 2x AES pipeline design, then AMD Zen 3 allowed a second doubling, letting its 256-bit vectors perform 2x AES per pipeline per clock tick (so now 4x 128-bit AES calculations per clock tick). Intel is pushing back with AVX-512, putting 4x AES on 512-bit registers.
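As a rough sanity check on what "4x 128-bit AES round operations per clock" buys in bulk modes like CTR or XTS, here is a back-of-the-envelope throughput estimate. All the numbers (round ops per cycle, clock speed) are illustrative assumptions rather than measurements; the one fixed fact is that AES-128 needs 10 round operations per 16-byte block:

```python
# Back-of-the-envelope bulk AES throughput, assuming fully pipelined
# AES round instructions. Illustrative numbers, not measurements.
round_ops_per_cycle = 4   # e.g. 2 pipes x 2 128-bit lanes, as described above
rounds_per_block = 10     # AES-128: 10 rounds per 16-byte block
block_bytes = 16
clock_hz = 5.0e9          # assumed 5 GHz core

bytes_per_cycle = round_ops_per_cycle / rounds_per_block * block_bytes
throughput_gb_s = bytes_per_cycle * clock_hz / 1e9
print(f"{throughput_gb_s:.0f} GB/s")  # 32 GB/s under these assumptions
```

Even under these rough assumptions, a single core comfortably outruns any consumer network link, which is why the debate above is really about weighting, not capability.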

If anything, these CPU-manufacturers prove how important AES is to today's workloads, be it server or client. Apple has extremely fast AES units, AMD has extremely fast AES units, Intel has extremely fast AES units. Everyone's optimizing AES here. This isn't an Intel or Geekbench thing: literally the entire industry is pushing faster-and-faster AES performance.

Youtube? Delivered by HTTPS. Netflix? Each 30-minute to 60-minute episode at 1080p is 4 GB to 10 GB, all of which has to be AES-decrypted as you're watching. Everything you're doing on the internet is HTTPS encrypted and goes through that AES core before any further processing (decoding, or rendering) can be done. It's a non-trivial bottleneck in all web-based applications.

Giving it 5% weight seems fine.