Tuesday, August 18th 2015

Intel "Skylake" Die Layout Detailed

At the heart of the Core i7-6700K and Core i5-6600K quad-core processors, which made their debut at Gamescom earlier this month, is Intel's swanky new "Skylake-D" silicon, built on its new 14 nanometer fab process. Intel released technical documents that give us a peek into the die layout of this chip. To begin with, the Skylake silicon is tiny compared to its 22 nm predecessor, the Haswell-D (i7-4770K, i5-4670K, etc.).

What also sets this chip apart from its predecessors, going all the way back to "Lynnfield" (and perhaps even "Nehalem"), is that it's a "square" die. The CPU component, made up of four cores based on the "Skylake" micro-architecture, is split into two rows of two cores each, facing each other across the chip's L3 cache. This is a departure from older layouts, in which a single file of four cores lined one side of the L3 cache. The integrated GPU, Intel's Gen9 iGPU core, takes up nearly as much die area as the CPU component. The uncore component (system agent, IMC, I/O, etc.) takes up the rest of the die. The integrated Gen9 iGPU features 24 execution units (EUs), spread across three EU-subslices of 8 EUs each. This GPU supports DirectX 12 (feature level 12_1). We'll get you finer micro-architecture details very soon.
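For scale, here's a quick back-of-envelope on what those 24 EUs amount to; a minimal Python sketch, assuming the documented Gen9 EU layout (two 4-wide, FMA-capable SIMD FPUs per EU) and a ~1,150 MHz boost clock as on the HD 530:

# Rough peak FP32 throughput for the Gen9 GT2 iGPU.
# Assumptions: 2 SIMD FPUs per EU x 4 lanes x 2 FLOPs (FMA) per cycle,
# and a ~1150 MHz boost clock (HD 530 spec), so treat this as an estimate.
eus = 24                            # 3 subslices x 8 EUs
flops_per_eu_per_cycle = 2 * 4 * 2  # FPUs x lanes x FMA
boost_clock_hz = 1.15e9

peak_gflops = eus * flops_per_eu_per_cycle * boost_clock_hz / 1e9
print(f"Peak FP32: {peak_gflops:.1f} GFLOPS")   # ~441.6 GFLOPS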

82 Comments on Intel "Skylake" Die Layout Detailed

#26
Fx
JossJoking apart, that's exactly the problem: the Red team is not putting up a fight and Intel can do as they please.
Problem is, considering what AMD did with Fury I'm not hoping much for their next FX series (if there is one).
You are half retarded to be laying blame on AMD for putting out a respectable new GPU. Cry more please; plenty of tissues for you fellas.
#27
tabascosauz
FxYou are half retarded to be laying blame on AMD for putting out a respectable new GPU. Cry more please; plenty of tissues for you fellas.
Why does his comment regarding R9 Fury qualify him as a half-retard? It's an AMD product, and the Fury X's failure rests squarely on AMD's shoulders. In any case, AMD took a page out of Nvidia's Titan book with the Fury X, only to see it overshadowed by the much more appropriately priced Fury shortly afterwards. Fury is a competitive product. Fury X is a little questionable; making a card that excels only in the niche market of watercooled SFF is not in AMD's best financial interests.
#30
Fx
tabascosauzWhy does his comment regarding R9 Fury qualify him as a half-retard? It's an AMD product, and the Fury X's failure rests squarely on AMD's shoulders. In any case, AMD took a page out of Nvidia's Titan book with the Fury X, only to see it overshadowed by the much more appropriately priced Fury shortly afterwards. Fury is a competitive product. Fury X is a little questionable; making a card that excels only in the niche market of watercooled SFF is not in AMD's best financial interests.
The context of the thread is about performance, not price-to-performance across tiers. In that regard, the Nano, Fury and Fury X are all doing just fine.

On the CPU side, yeah, Intel can manipulate the market however they want because they have it like that.
#31
vega22
ZenZimZalibenOn the new architecture at 14nm. Eventually this will trickle into Xeons... Already here as Xeon D.
i bet they could double up the cores with the 14nm fab if they dropped the igp and qpi shit.

they just won't, as it would kill x99 without doubling up the E range first.
#32
MxPhenom 216
ASIC Engineer
Seriously, all these people complaining about the iGPU-to-core ratio: news flash, Intel has a platform with no iGPU and more cores, x79/x99. Maybe look into that rather than bitch about the mainstream platform not having enough cores, when software still barely uses more than 2. Nothing has changed.
#33
R-T-B
tabascosauzI'm beating a dead horse here, but the intent of HEDT is to satisfy the exact conditions that most of you seem to expect from a top of the product stack mainstream i5/i7.

The complaints won't stop until Intel gets rid of the GPU, and they really won't stop because Intel is not going to take that GPU off. This die is going to power the other desktop i5s and i7s and those are parts intended for powering a 1080P monitor at work without the assistance of a dGPU. Not throwing money into the water for a pointless GPU-less Skylake design that basically gets them no $$$ at all is actually a pretty smart business plan, believe it or not.

Not a lot of praise where it's deserved, I'm afraid. I wouldn't be showering Intel with praise 24/7, but I didn't see any positive comments about the 5820K when it was released, only "only 28 PCIe lanes? What if I need the extra 12 to get maximum performance while getting off every day?" Tried to appease the enthusiasts with a 6-core HEDT part below $400, well, I guess that didn't work out very well, did it? The 5820K wasn't even a forced hand; it could very well have been a carbon copy of the 4820K, just on Haswell, as there are 4-core E5 V3s a-plenty to prove that.

Give people something better, and they'll find something better to complain about.
But HEDT costs too many monies...
#34
newtekie1
Semi-Retired Folder
In the space they wasted on shitty, barely capable graphics they could have stuck another 4 cores...
MxPhenom 216Seriously, all these people complaining about the iGPU-to-core ratio: news flash, Intel has a platform with no iGPU and more cores, x79/x99. Maybe look into that rather than bitch about the mainstream platform not having enough cores, when software still barely uses more than 2. Nothing has changed.
The iGPU is going to be wasted too. And x99 is stupid expensive; anything with 8 cores is $1,000 just for the processor.
#35
ppn
At some point that Skylake starts to look laughable compared to a previous, similarly sized chip priced around $100: the 2-core on 32 nm was 149 mm². The 355 mm² 5960X is clearly not mainstream.

Now imagine that, instead of just the 4-core 133 mm² die with an IGP (rarely used), Intel offered another SKU: an 8-core 133 mm² die with no IGP, a true mainstream part, the IGP replaced with something useful at no cost at all except copy-pasting some cores. Simple as that.
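For what it's worth, the copy-paste arithmetic roughly checks out; a back-of-envelope Python sketch, where the ~40% iGPU share (a figure from this thread) and the ~9 mm² per core-plus-L3-slice footprint are assumptions rather than Intel numbers:

# Sanity check of the "replace the IGP with cores" idea, using rough numbers.
die_mm2 = 133.0                     # ppn's die size figure
igpu_mm2 = 0.40 * die_mm2           # assumed iGPU share -> ~53 mm^2 freed
core_slice_mm2 = 9.0                # assumed core + L3 slice footprint

extra_cores = int(igpu_mm2 // core_slice_mm2)
print(f"iGPU area ~{igpu_mm2:.0f} mm^2 -> room for ~{extra_cores} more cores")
# -> ~5 more cores, so an 8-core die in the same ~133 mm^2 is at least plausible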
#36
MxPhenom 216
ASIC Engineer
newtekie1In the space they wasted on shitty, barely capable graphics they could have stuck another 4 cores...



The iGPU is going to be wasted too. And x99 is stupid expensive; anything with 8 cores is $1,000 just for the processor.
And you think that if they added 4 more cores to the 6700k, the price wouldn't change? HAHAHAHA
#37
newtekie1
Semi-Retired Folder
MxPhenom 216And you think that if they added 4 more cores to the 6700k, the price wouldn't change? HAHAHAHA
It wouldn't be anywhere near $1,000, that is the point.

Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.
#38
kn00tcn
Sony Xperia SThat's unbelievable - 50% for graphics and 50% of the shared die area for the real stuff.

Of course, this sucks, and it sucks even worse when you know that Intel does nothing at least to try to develop software environment to unleash all those wasted transistors.
40% of the die - to be not used.
make up your mind: is it 50% (it obviously isn't, get a ruler) or is it 40%, or what is it?

unbelievable, we have a gpu that is capable of video encoding & opencl
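Anyone who wants those transistors put to work can already enumerate them; a minimal Python sketch, assuming the pyopencl package and Intel's OpenCL runtime are installed:

# List the OpenCL platforms/devices the system exposes (the iGPU among them).
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "->", device.name,
              "| compute units:", device.max_compute_units)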
#39
semantics
newtekie1It wouldn't be anywhere near $1,000, that is the point.

Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.
Market segmentation, lack of competition from AMD.
#40
ensabrenoir
..........maybe i shouldn't have used a big font.....it's sad to see so many that just don't get it......
there is a HEDT line for a reason........ this is mainstream so........

......ITS ALL ABOUT THE IGPU!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!



#42
tabascosauz
newtekie1It wouldn't be anywhere near $1,000, that is the point.

Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.
This is the issue. And unlike the gentleman from earlier, I will not resort to calling you a half-retard even though I disagree with your views, because we are all civilized around here and are entitled to our own opinions.

It was clearly a gamble, in some respects, for Intel to make the entry HEDT SKU a hex-core. On one side, a reasonable consumer might welcome it as a long-anticipated update to the HEDT lineup, at a reasonable price of ~$380, same as the quad-core 4820K. On the other hand, others will (without having any intention of buying X99) look at the 5820K and ask Intel / complain to other people that the 4790K should have been hex-core too.

Seriously?

If the 6700K was a hex-core with no iGPU, why would the 5820K and 5930K even exist? For the sole purpose of offering more PCIe lanes? Quad-channel DDR4 (oh look, double the bandwidth, must be double the FPS too)? Intel would be shooting itself in the foot.

1. Extra costs into the 6700K. Can't take a Xeon die like the 5820K and 5930K because it's not LGA2011. Need to make a new hex-core die on LGA1151. In the end, no one ends up buying it because the extra R&D costs warrant a higher price tag, and everyone says "DX12 is coming, FX-8350 offers similar performance for about 1/2 the price". Lost lots of money here.

2. 5820K and its successor just die. I mean, what else are these two supposed to do (in addition to the 5930K, also dead)? Next, people boycott LGA2011 and say that "unless the 5960X's successor is a 10-core, I won't buy anything 2011". Jesus. So what is Intel supposed to do now? Lost more money here.

3. Intel slips back into the Pentium days and becomes no better than AMD. These few years have been about forcing the TDP down (don't look at the 6700K, look at the fact that Broadwell desktop was 65W and the other Skylake SKUs are of lower TDP than Haswell). Six-core would mean 140W on LGA1151. There aren't any stock coolers on LGA2011 because none of Intel's stock coolers (except for that one oddity during the Westmere era that was a tower) can handle that kind of heat output. 140W? Better get better VRMs, because those H81M-P33 MOSFETs aren't going to take on a six-core. And "hey look, AMD actually has stock coolers that can handle their CPUs". Lots of confusion and more money lost.

4. What happens to the other LGA1151 SKUs? Did they suddenly cease to exist? 28 PCIe lanes for the 6700K and...what? Would you want to try and explain this disjointed lineup to anyone? "Oh yeah, the top dog in the LGA1151 family is really, really powerful, but although the rest are all 6th Gen Core, they all suck in comparison."

"Intel deserves this dilemma because they cheated by winning over the OEMs anticompetitively" is not a valid argument in this scenario. Those were pre-Netburst eradication days. Prior to Carrizo, there were plenty of opportunities for AMD in the OEM laptop and desktop market, since Trinity/Richland and to a lesser extent, Kaveri APUs offered much more to the average user than a i3/i5.
#43
MxPhenom 216
ASIC Engineer
newtekie1It wouldn't be anywhere near $1,000, that is the point.

Though I don't really want 4 more cores, I'd prefer 2 more cores, and 8 more PCI-E lanes.
5820k is available. Similar price to 6700k.
#44
Viruzz
ZenZimZalibenLook at the size of that GPU. It takes up more than 1/3 of the chip... I do not understand why a K-class chip even has a freaking IGP. 90% of the people interested in the K series run a dedicated GPU. So basically 35% of the cost of the chip is going towards an IGP that will never be used. I would much rather pay for 35% more cores. With the IGP gone it would be easy to fit in at least 2 more cores without expanding the die.
I agree with you, I was thinking the same, but here are some counterpoints I was thinking to myself:
1. The Intel iGPU supports Intel QuickSync; if you compress videos it's a godsend. It's faster than the CPU and faster than GPU-accelerated compression like CUDA and AMD's equivalent, whatever it's called. Basically it's the fastest way to compress videos (see the sketch after this post).
2. It supports DXVA2, so you can offload video decoding when you watch movies, even Blu-ray and 3D.
3. You always have a GPU! Things like: your GPU died, you're testing something, you tried to overclock your GPU with a BIOS flash and now it works but gives no signal... etc. (basically it's good to have an extra GPU).
4. In the future (when we might already be dead), DX12 games will support a many-graphics-cards mode (don't confuse it with SLI), so every GPU, no matter its maker, will be able to work together, and every gamer with an iGPU will get a FREE FPS boost.
5. This is about Broadwell CPUs only: if you look at benchmarks of all 3 modern CPUs, Skylake, Broadwell and Haswell, downclocked or overclocked to the same speed, in some games Broadwell gets an extra 20+ FPS just because it has an iGPU with 128 MB of fast eDRAM that also works as an L4 cache (with the iGPU enabled it allocates up to 50% of the eDRAM as L4 cache; if you disable the iGPU, all of it works as L4 cache).
Right now Broadwell is in fact the FASTEST CPU for gaming! Just think: 3.3 GHz Broadwell vs. 4 GHz 4790K, and both get 82 FPS in Metro Redux at FHD+Max Quality and 97 FPS in Tomb Raider at FHD+High Quality.
Any smart gamer (not me) will buy a Broadwell CPU from a web site like (Digital Lottery) that guarantees an overclock of 4 GHz and above, and his system is going to rape both Skylake and Haswell in every game; I'm 90% sure it's going to beat the 2 CPUs that come after Skylake too, unless they have the same iGPU technology with eDRAM.


Basically what I'm saying is that you are 10000% right: there is absolutely no need for an iGPU on the i7 series of processors. What we NEED is 128/256 MB of fast L4 cache!!!
Intel CAN DO IT, we've seen it already with Broadwell: just remove the iGPU and keep the cache, or even better increase it to 256 MB, and keep the same price for the CPU.
I don't know how much each part costs, but somehow I'm sure that a GPU is more expensive than some cache!


End Rant :)
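On point 1, handing an encode to QuickSync looks roughly like this; a Python sketch assuming an ffmpeg build with QSV support compiled in, and placeholder file names:

# Offload H.264 encoding to QuickSync via ffmpeg's h264_qsv encoder.
# Assumes ffmpeg was built with QSV (libmfx) support; input.mp4/output.mp4
# are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",    # source file (placeholder)
    "-c:v", "h264_qsv",             # QuickSync H.264 encoder
    "-global_quality", "23",        # quality target, similar role to CRF
    "output.mp4",
], check=True)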
#45
FordGT90Concept
"I go fast!1!11!1!"
Uplink10Why?
Why not when you have a dedicated GPU?
MxPhenom 2165820k is available. Similar price to 6700k.
It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.
Viruzz5. This is about Broadwell CPUs only: if you look at benchmarks of all 3 modern CPUs, Skylake, Broadwell and Haswell, downclocked or overclocked to the same speed, in some games Broadwell gets an extra 20+ FPS just because it has an iGPU with 128 MB of fast eDRAM that also works as an L4 cache (with the iGPU enabled it allocates up to 50% of the eDRAM as L4 cache; if you disable the iGPU, all of it works as L4 cache).
Right now Broadwell is in fact the FASTEST CPU for gaming! Just think: 3.3 GHz Broadwell vs. 4 GHz 4790K, and both get 82 FPS in Metro Redux at FHD+Max Quality and 97 FPS in Tomb Raider at FHD+High Quality.
Any smart gamer (not me) will buy a Broadwell CPU from a web site like (Digital Lottery) that guarantees an overclock of 4 GHz and above, and his system is going to rape both Skylake and Haswell in every game; I'm 90% sure it's going to beat the 2 CPUs that come after Skylake too, unless they have the same iGPU technology with eDRAM.
Bear in mind that Broadwell's GPU is substantially larger than Skylake's. Broadwell also has an MCM'd memory chip. Seems pretty silly that Skylake doesn't best Broadwell in that area, because it certainly could at least match it.

I think Intel was reeling from 14nm being so difficult and they're looking for ways to offset their costs. Keeping costs down with Skylake was their answer. 10nm is going to be very, very difficult (and costly) to reach.
#46
MxPhenom 216
ASIC Engineer
FordGT90ConceptWhy not when you have a dedicated GPU?


It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.



Bear in mind that Broadwell's GPU is substantially larger than Skylake's. Broadwell also has an MCM'd memory chip. Seems pretty silly that Skylake doesn't best Broadwell in that area, because it certainly could at least match it.

I think Intel was reeling from 14nm being so difficult and they're looking for ways to offset their costs. Keeping costs down with Skylake was their answer. 10nm is going to be very, very difficult (and costly) to reach.
I was talking CPU alone. As an enthusiast, spending some extra cash on RAM shouldn't be a problem, seeing how you'll be spending a royal shit ton on the rest of the system anyways.
#47
Viruzz
FordGT90ConceptWhy not when you have a dedicated GPU?


It's about $200 more, $300 more if you include 4 sticks of memory instead of 2.



Bear in mind that Broadwell's GPU is substantially larger than Skylake's. Broadwell also has an MCM'd memory chip. Seems pretty silly that Skylake doesn't best Broadwell in that area, because it certainly could at least match it.

I think Intel was reeling from 14nm being so difficult and they're looking for ways to offset their costs. Keeping costs down with Skylake was their answer. 10nm is going to be very, very difficult (and costly) to reach.
I wasn't talking about integrated GPU performance but about the benefits of the L4 cache that we get in Broadwell chips, which have 128 MB of eDRAM for integrated graphics; this eDRAM is also used as additional L4 cache for the CPU.
There is no L4 cache in any other CPU.
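To see why that L4 shows up in game benchmarks, here's a toy average-latency calculation in Python; the ~35 ns eDRAM and ~80 ns DRAM numbers are rough assumptions for illustration, not measured specs:

# Effective memory latency for LLC misses, with and without an eDRAM L4.
l4_latency_ns = 35.0    # assumed eDRAM hit latency
dram_latency_ns = 80.0  # assumed main-memory latency

for hit_rate in (0.0, 0.3, 0.6):
    avg = hit_rate * l4_latency_ns + (1 - hit_rate) * dram_latency_ns
    print(f"L4 hit rate {hit_rate:.0%}: avg latency ~{avg:.0f} ns")
# The more of a game's working set the 128 MB L4 captures, the closer the
# effective latency gets to the eDRAM's - hence the extra FPS.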
#48
Sony Xperia S
kn00tcnmake up your mind, is it 50% (it obviously isnt, get a ruler) or is it 40% or what is it
50-50 CPU-GPU, and 40% of the total area.
You are the one who needs to reread posts and try to understand them better.
kn00tcnunbelievable, we have a gpu that is capable of video encoding & opencl
Take it to where you took it out from. I didn't ask intel for the graphics part. I guess no one has ever asked them to integrate it on the die. They force you and screw you very big time.

Scumbags intel.
#49
tabascosauz
Sony Xperia S50-50 CPU-GPU, and 40% of the total area.
You are the one who needs to reread posts and try to understand them better.



Take it to where you took it out from. I didn't ask intel for the graphics part. I guess no one has ever asked them to integrate it on the die. They force you and screw you very big time.

Scumbags intel.
The hypocrisy is real. Since FM2+ is AMD's modern platform, as AM3+ is only hanging on with 3rd party controllers (and AMD is basically disowning it), let's take a look at the 6800K's die.



Oh dear, 40% of the die is dedicated to the iGPU. What about the 7850K?



*gasp* Is that 60%? 70%? Before you start going on about how this is an "APU", it really isn't very different from Intel's mainstream "CPUs". Also, since it's apples to apples, try getting video output out of an FX-8350 on a 990FXA-UD3. Hm? Black screen?
Sony Xperia S50-50 CPU-GPU and 40 of the total area.
I didn't ask intel for the graphics part.
Hey, I didn't ask AMD for a huge iGPU on die. But look at what I got anyway.
#50
Sony Xperia S
You are wrong.
AMD offered this innovation with the idea of accelerating general performance across all tasks.
Thanks to those scumbags intel, nvidia, microsoft, other "developers" and co., it probably won't happen.