Friday, May 22nd 2020

Intel Rocket Lake CPU Appears with 6 Cores and 12 Threads

We have been hearing a lot about Intel's Rocket Lake lineup of processors. It is supposed to be a backport of the 10 nm Willow Cove core, adapted to work on a 14 nm process for better yields. Meant to launch sometime in late 2020 or early 2021, Rocket Lake is designed to work on existing LGA1200 socket motherboards, which launched just a few days ago alongside Intel's Comet Lake CPUs. Rocket Lake is there to supply the desktop segment and satisfy user demand, given the lack of 10 nm offerings for desktop users. The 10 nm node will be present only in mobile/laptop and server solutions before it comes to the desktop.

In the latest 3DMark discovery, hardware leaker TUM_APISAK has found a Rocket Lake CPU running the benchmark, giving us a first look at the specifications of the Rocket Lake-S platform. The benchmark ran on a 6-core, 12-thread model with a base clock of 3.5 GHz. The CPU managed to boost up to 4.09 GHz; however, we are sure these are not final clocks, and the actual product should reach even higher frequencies. Paired with Gen12 Xe graphics, the Rocket Lake platform could offer a very nice alternative to AMD's offerings if the backport of Willow Cove goes well. Even though it still uses a 14 nm node, performance should be good. The only things sacrificed by the backport are die space and efficiency/heat.
Intel Rocket Lake Benchmark Report
Source: @TUM_APISAK (Twitter)

38 Comments on Intel Rocket Lake CPU Appears with 6 Cores and 12 Threads

#1
dj-electric
Willow Cove may be backported to 14 but... Inside.... things will be built...

Differently.
#2
lexluthermiester
dj-electric
Willow Cove may be backported to 14 but... Inside.... things will be built...

Differently.
How do you know?
#3
InVasMani
Always a good sign when you gotta backport your design to an older node because the new one sucks that badly.
#4
watzupken
I feel this is what Intel should have done earlier, instead of continuously trying to fix their 10nm year after year. The existing Skylake architecture has been around for 5 generations (Intel 6, 7, 8, 9, and 10), while 14nm was introduced in 2014 with Broadwell. Either they got overconfident or complacent, thinking that their competitors would not catch up with them in fab technology or CPU architecture, or both. Now they are paying the price. While AMD and ARM have not done huge damage to them yet, they are slowly but surely chipping decent chunks away.
InVasMani
Always a good sign when you gotta backport your design to an older node because the new one sucks that badly.
Or the old one got optimized to the point that it is better in performance, though not in efficiency. I doubt 10nm at this stage can hit the kind of clockspeeds needed to deliver better performance than the older 14nm. If clockspeed is not that important, i.e. in laptops and enterprise/servers, then 10nm makes sense. Currently these are the cases where Intel is implementing or planning to implement 10nm CPUs.
#5
dj-electric
lexluthermiester
How do you know?
How did I know many things before :)
#6
AnarchoPrimitiv
dj-electric
How did I know many things before :)
I could be wrong, but according to the information presented here, doesn't this indicate that the IPC gain is basically nothing if they're purely depending on clocks to make up the performance difference?
#7
InVasMani
watzupken
I feel this is what Intel should have done earlier, instead of continuously trying to fix their 10nm year after year. The existing Skylake architecture has been around for 5 generations (Intel 6, 7, 8, 9, and 10), while 14nm was introduced in 2014 with Broadwell. Either they got overconfident or complacent, thinking that their competitors would not catch up with them in fab technology or CPU architecture, or both. Now they are paying the price. While AMD and ARM have not done huge damage to them yet, they are slowly but surely chipping decent chunks away.


Or the old one got optimized to the point that it is better in performance, though not in efficiency. I doubt 10nm at this stage can hit the kind of clockspeeds needed to deliver better performance than the older 14nm. If clockspeed is not that important, i.e. in laptops and enterprise/servers, then 10nm makes sense. Currently these are the cases where Intel is implementing or planning to implement 10nm CPUs.
Like I said, always a good sign. Intel's 10nm has been bad for a long while. If it were good to begin with, they wouldn't have stayed on 14nm for an eternity, and you'd be seeing many more 10nm products as opposed to a few chips here and there aimed at low-power mobile devices.
#8
londiste
watzupken
I feel this is what Intel should have done earlier, instead of continuously trying to fix their 10nm year after year. The existing Skylake architecture has been around for 5 generations (Intel 6, 7, 8, 9, and 10), while 14nm was introduced in 2014 with Broadwell. Either they got overconfident or complacent, thinking that their competitors would not catch up with them in fab technology or CPU architecture, or both.
Neither, really. Their communication had broken down, and teams were not cooperating or being truthful with one another. The foundry R&D team kept saying 10nm would be here soon, and the other teams took that as a legitimate statement. With the issues cropping up on 10nm, the reality eventually reached everyone, but it took long years of reassurances from the foundry side that the issues were being resolved and 10nm would be here soon. This was hinted at in the way the CEO was taken down and Intel's management reshuffled. Pretty much this exact story was shared by anonymous sources and stitched together by several tech journalists and YouTubers; it became public at the end of 2018. In terms of CPUs, the timeframe is Coffee Lake Refresh (8000-9000 series).

Now they finally knew what was going wrong, but fixing it is not simple or quick. The lead time for a major change in CPUs is measured in years; a new CPU generation takes about a year to a year and a half to do, if not more. Somewhere in mid to late 2018, Intel reportedly started a project to untie architecture from manufacturing process. This was never really the case with Intel CPUs before, as they had tight integration between architecture and manufacturing, which had long been a benefit for them. This rumored/reported project is part of why the Rocket Lake rumors are what they are: Intel should by now be able to port a CPU architecture to various manufacturing nodes, including a newer architecture like Willow Cove back to 14nm.

If the rumors and reports from anonymous sources are true, this fits quite well with the timeline here.
#9
LTUGamer
If they are released late this year, it looks like the current CPUs' lifetime is very short :D

Anyway, if total performance ends up higher than current chips, single-core performance would be amazing
#10
Chrispy_
InVasMani
Always a good sign when you gotta backport your design to a older node because the new one sucks that badly.
I remember reading an Nvidia blog saying that they had to back-port designs to 40nm because the 28nm node was constrained, and they faced huge challenges getting the clockspeeds up on the larger node.

I don't know if it was related, but perhaps that's why Nvidia's Kepler architecture was abandoned so fast and aged poorly - maybe it wasn't the design Nvidia wanted, but an ugly kludge to make the 28nm design work at 40nm...?
#11
ppn
LTUGamer
If they are released late this year, it looks like the current CPUs' lifetime is very short :D

Anyway, if total performance ends up higher than current chips, single-core performance would be amazing
Current chips, including all 7nm ones, are already obsolete; 5nm has entered production and will take 1-2 years to appear in GPUs and CPUs, and even 3nm is coming very, very soon. I mean, it is just such a letdown to buy anything using DDR4 right now.

Just such a waste buying anything on 7 or 14nm. All the same old beta-testing devices.

The initial performance of Willow doesn't look promising. The 3DMark physics score is lower than the 10400F's.
I expected groundbreaking performance, +40% or something.
#12
1d10t
So much for Comet Lake :D
#13
bug
Chrispy_
I remember reading an Nvidia blog saying that they had to back-port designs to 40nm because the 28nm node was constrained, and they faced huge challenges getting the clockspeeds up on the larger node.

I don't know if it was related, but perhaps that's why Nvidia's Kepler architecture was abandoned so fast and aged poorly - maybe it wasn't the design Nvidia wanted, but an ugly kludge to make the 28nm design work at 40nm...?
Happened to AMD, too. TSMC failed to get 22nm yields up, so both players had to implement 22nm designs using the 28nm node.
Nvidia chose to cut back on compute resources and implement TBR, AMD stuck with what they had.
#14
jeremyshaw
bug
Happened to AMD, too. TSMC failed to get 22nm yields up, so both players had to implement 22nm designs using the 28nm node.
Nvidia chose to cut back on compute resources and implement TBR, AMD stuck with what they had.
Chrispy_
I remember reading an Nvidia blog saying that they had to back-port designs to 40nm because the 28nm node was constrained, and they faced huge challenges getting the clockspeeds up on the larger node.

I don't know if it was related, but perhaps that's why Nvidia's Kepler architecture was abandoned so fast and aged poorly - maybe it wasn't the design Nvidia wanted, but an ugly kludge to make the 28nm design work at 40nm...?
You're both talking about the same event. Kepler was 28nm anyway; Fermi was their only 40nm design, and 28nm wouldn't exist for some time yet.

On 28nm, both Nvidia and AMD chose to keep their designs on 28nm for two generations (Kepler and Maxwell for Nvidia). Nvidia did release 20nm products, but not as consumer GPUs (Tegra X1 in the Switch was an odd 20nm part). Due to TSMC's struggles with 28nm, then 20nm, Nvidia and AMD would both move some (though not all) production away from TSMC for their next consumer products. Nvidia went to former enemy Samsung and AMD went to former spinoff GloFo. TSMC would shore up their 20nm process with FinFETs (sparking a renaming to "16nm," despite keeping the same planar density for the most part) and all was well again.

It should be noted, this whole thing with TSMC and 28nm also drove Apple to go back to Samsung to dual source parts, too.

If I had to guess, TSMC 28nm caused a lot of partners to lose trust and faith in TSMC's delivery. This was around the time TSMC lost one of their process development chiefs to Samsung (now at SMIC), over internal politics at TSMC. Samsung's 14nm, developed under him, was quite successful. It took clients away from TSMC and was licensed to GloFo to rescue that dumpster fire for a few more years.
#15
Vya Domus
I honestly lost track completely of these many "something Lake" iterations and which are supposed to be a new architecture on 10nm and which are not.
#16
Vayra86
ppn
Current chips, including all 7nm ones, are already obsolete; 5nm has entered production and will take 1-2 years to appear in GPUs and CPUs, and even 3nm is coming very, very soon. I mean, it is just such a letdown to buy anything using DDR4 right now.

Just such a waste buying anything on 7 or 14nm. All the same old beta-testing devices.

The initial performance of Willow doesn't look promising. The 3DMark physics score is lower than the 10400F's.
I expected groundbreaking performance, +40% or something.
Sometimes I wonder if you're that guy from Back to the Future :laugh:
#17
lexluthermiester
dj-electric
How did I know many things before :)
Well, go on, answer the question, eh?
AnarchoPrimitiv
I could be wrong, but according to the information presented here, doesn't this indicate that the IPC gain is basically nothing if they're purely depending on clocks to make up the performance difference?
That about sums it up. It's not a serious IPC gain; it's just a refresh of clock speeds and a slight refinement to the core pipeline.
#20
GoldenX
Finally, after 5 years, something new.
#21
HenrySomeone
InVasMani
Always a good sign when you gotta backport your design to a older node because the new one sucks that badly.
Well, unfortunately for AMD and their many .... overzealous fans (to say the least), this older node is still mostly better core for core than their spanking new 7nm one, except of course in extreme core-count scenarios where 14nm's somewhat higher power consumption at high clocks starts to add up. Team red had better bring 5nm EUV as soon as possible, or they'll be toast once Intel finally moves to a proper new node...
#22
Shatun_Bear
HenrySomeone
Well, unfortunately for AMD and their many .... overzealous fans (to say the least), this older node is still mostly better core for core than their spanking new 7nm one, except of course in extreme core-count scenarios where 14nm's somewhat higher power consumption at high clocks starts to add up. Team red had better bring 5nm EUV as soon as possible, or they'll be toast once Intel finally moves to a proper new node...
Better per core?

Nope. Intel is slower multithreaded and draws more power per core. The only thing they win in is applications that use a handful of cores at high frequency, which is increasingly a dying breed in a world of 8-core laptops and 16-core desktops.
#23
watzupken
HenrySomeone
Well, unfortunately for AMD and their many .... overzealous fans (to say the least), this older node is still mostly better core for core than their spanking new 7nm one, except of course in extreme core-count scenarios where 14nm's somewhat higher power consumption at high clocks starts to add up. Team red had better bring 5nm EUV as soon as possible, or they'll be toast once Intel finally moves to a proper new node...
Overzealous fans? I am not sure who you are implying here, when you sound like one yourself.

If you look at a one-dimensional metric, then yes, Intel's 14nm is able to clock faster than TSMC's 7nm, but at the expense of taking more than 2x the power. Clock for clock, Intel's ageing Skylake architecture lags behind, thus the desperate need to increase clockspeeds to retain the single-core performance crown. Unfortunately, the earliest you can see 7nm from Intel could be late next year, if not the year after, assuming no delays. The current Intel 10nm appears to be completely messed up. While Intel claims it's turning out better than they expected, I am not sure how high the expectation is now after the many years of delay.
#24
lemonadesoda
The takeaway from all this discussion is that as long as Intel keeps "lake" in their naming, the architecture and fab process will be full of compromises and yesteryear. So I'll be waiting for either Intel "Estuary", "Ocean" or "River", depending on which way the marketing teams decide to go after "lake". Unfortunately, that looks like 2023 or later...
#25
HenrySomeone
watzupken
Overzealous fans? I am not sure who you are implying here, when you sound like one yourself.

If you look at a one-dimensional metric, then yes, Intel's 14nm is able to clock faster than TSMC's 7nm, but at the expense of taking more than 2x the power. Clock for clock, Intel's ageing Skylake architecture lags behind, thus the desperate need to increase clockspeeds to retain the single-core performance crown. Unfortunately, the earliest you can see 7nm from Intel could be late next year, if not the year after, assuming no delays. The current Intel 10nm appears to be completely messed up. While Intel claims it's turning out better than they expected, I am not sure how high the expectation is now after the many years of delay.
It's only 2x the power in borderline situations; certainly not stock vs stock, and in most cases also not OCed vs OCed. Actually, at around 4 GHz, Intel's 14nm is still more power-efficient in most workloads; let that sink in for a moment.