Sunday, August 7th 2016

Intel "Coffee Lake" Platform Detailed - 24 PCIe Lanes from the Chipset

Intel seems to be addressing key platform limitations with its 8th generation Core "Coffee Lake" mainstream desktop platform. The first Core i7 and Core i5 "Coffee Lake" processors will launch later this year, alongside motherboards based on the Intel Z370 Express chipset. Leaked company slides detailing this chipset make an interesting revelation: the chipset itself puts out 24 PCI-Express gen 3.0 lanes, not counting the 16 lanes the processor puts out for up to two PEG (PCI-Express Graphics) slots.

The PCI-Express lane budget of the "Coffee Lake" platform is a huge step up from the 8-12 general-purpose lanes put out by previous-generation Intel chipsets, and will enable motherboard designers to cram their products with multiple M.2 and U.2 storage options, in addition to bandwidth-heavy onboard devices such as additional USB 3.1 and Thunderbolt controllers. The chipset itself integrates a multitude of bandwidth-hungry connectivity options, including a 10-port USB 3.1 controller, of which six ports run at 10 Gbps and four at 5 Gbps.
Other onboard controllers include a SATA AHCI/RAID controller with six SATA 6 Gbps ports. The platform also introduces a PCIe storage option (either an M.2 slot or a U.2 port) that is wired directly to the processor. This draws inspiration from AMD's AM4 platform, in which an M.2/U.2 option is wired directly to the SoC, alongside two SATA 6 Gbps ports. The chipset also integrates a WLAN interface with 802.11ac and Bluetooth 5.0, though we think only the controller logic is integrated, and not the PHY itself (which needs to be isolated for signal integrity).
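A back-of-the-envelope sketch of what those lane counts imply: the per-lane figure below is the nominal PCIe 3.0 rate after 128b/130b encoding, and DMI 3.0 (the chipset's uplink to the CPU) is electrically equivalent to a PCIe 3.0 x4 link, so the chipset's fan-out is heavily oversubscribed by design.

```python
# Approximate figures: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b
# encoding, giving roughly 0.985 GB/s of usable bandwidth per lane.
PCIE3_GBPS_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s

chipset_lanes = 24
dmi_lanes = 4  # DMI 3.0 is electrically a PCIe 3.0 x4 link

downstream = chipset_lanes * PCIE3_GBPS_PER_LANE
uplink = dmi_lanes * PCIE3_GBPS_PER_LANE

print(f"Chipset fan-out: {downstream:.1f} GB/s across {chipset_lanes} lanes")
print(f"DMI uplink:      {uplink:.1f} GB/s")
print(f"Oversubscription: {downstream / uplink:.0f}x")
```

This oversubscription is normal for a PCH (not every downstream device bursts at once), but it is the root of the NVMe RAID concerns raised in the comments below.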

Intel is also making the biggest change to onboard audio standards since the 15-year-old Azalia (HD Audio) specification. The new Intel SmartSound Technology sees the integration of a "quad-core" DSP directly into the chipset, with a reduced-function CODEC sitting elsewhere on the motherboard, probably wired using I2S instead of PCIe (as in the case of Azalia). This could still very much be a software-accelerated technology, in which the CPU does the heavy lifting of DA/AD conversion.

According to leaked roadmap slides, Intel will launch its first 8th generation Core "Coffee Lake" processors, along with motherboards based on the Z370 chipset, within Q3 2017. Mainstream and value variants of this chipset will launch only in 2018. Sources: VideoCardz, PCEVA Forums

119 Comments on Intel "Coffee Lake" Platform Detailed - 24 PCIe Lanes from the Chipset

#51
Prince Valiant
EarthDog said:
See... and you hang on points like (I didn't call you a fanboy, I just said you lean one way, note) this while not addressing the counterpoints (the usefulness of HBM now for the next couple of years/ PCIe lanes making a huge difference)... you are running out of steam and the straw man arguments are getting old.
Throw a rock at an argument and you're likely to hit one where this happens.
#52
bug
RejZoR said:
Bad, evil, hated, does it matter? You all just declared I'm a fanboy against all logic to define that. It goes entirely against the narrative and you just keep on grabbing it and running with it. And every time I see it I'm like, BUT HOOOOOOOW!?!!?!
In this thread, this all started because you complained about the number of PCIe lanes on announced mainstream chips compared to the number of lanes on HEDT nine years ago. Were you expecting thanks?
#53
Manu_PT
He should just be banned. Very annoying user. He must have nightmares about nvidia and intel
#54
bug
Manu_PT said:
He should just be banned. Very annoying user. He must have nightmares about nvidia and intel
I wouldn't ban him just for being annoying. I'd just like him to open his eyes and post a little more on topic.
#55
Manu_PT
On other forums I visit you are not allowed to constantly spread hate against a brand. One thing is giving your opinion with an objective basis; another is just spamming and showing everyone how much you hate some company. And that is what this guy has done since Ryzen released, to the point that no one takes him seriously anymore.
#56
bug
Manu_PT said:
On other forums I visit you are not allowed to constantly spread hate against a brand. One thing is giving your opinion with an objective basis; another is just spamming and showing everyone how much you hate some company. And that is what this guy has done since Ryzen released, to the point that no one takes him seriously anymore.
I don't get the feeling he's spreading hate so much as praising AMD even when there's little reason to do so. But then again, I hadn't been paying much attention until recently, when he crossed the line into annoying territory.
#57
Parn
TheLostSwede said:
Due to the integrated graphics, Intel only has so much die space before the chips get too costly to make.

Although they're not perfect comparisons, you can see that the I/O takes up a lot more space on the latter and part of this is the PCIe root complex. So there are some trade-offs to be done when it comes to die space used up by whatever part you want to stick inside a chip.

Likewise, AMD compromised on Ryzen: although we get 20 usable lanes from the CPU, the chipset is instead crippled by only offering PCIe 2.0 lanes. Surprisingly, the NVMe performance difference between Ryzen and Intel (at least in my case, using a Plextor M8PeG drive) is actually in favour of Intel in most tests, and that was using a Z170 board.

Regardless, it would be nice to see Intel adding another 4-8 PCIe lanes to the CPU that could be used for storage and, say, 10 Gbps Ethernet.
Well, the 6950X had a 40-lane PCIe root complex. Even if that is halved, I'm pretty sure Intel can squeeze 20 lanes onto the 7700K and still keep the die size reasonably cost effective.
#58
TheLostSwede
Parn said:
Well, the 6950X had a 40-lane PCIe root complex. Even if that is halved, I'm pretty sure Intel can squeeze 20 lanes onto the 7700K and still keep the die size reasonably cost effective.
That's what they do: 16 for GPUs, 4 for DMI...
#59
StefanM
Some "Coffee Leak" at GFXbench :p

https://gfxbench.com/device.jsp?benchmark=gfx40&os=Windows&api=gl&cpu-arch=x86&hwtype=iGPU&hwname=Intel(R)%20UHD%20Graphics%20620&did=50174229&D=Intel(R)%20Core(TM)%20i7-8550U%20CPU%20with%20UHD%20Graphics%20620

https://gfxbench.com/device.jsp?benchmark=gfx40&os=Windows&api=gl&cpu-arch=x86&hwtype=iGPU&hwname=Intel(R)%20UHD%20Graphics%20620&did=52275942&D=Intel(R)%20Core(TM)%20i5-8250U%20CPU%20with%20UHD%20Graphics%20620

https://gfxbench.com/device.jsp?benchmark=gfx40&os=Windows&api=gl&cpu-arch=x86&hwtype=iGPU&hwname=Intel(R)%20HD%20Graphics%20620&did=48651203&D=Intel(R)%20Core(TM)%20i5-8250U%20CPU%20with%20HD%20Graphics%20620

https://gfxbench.com/device.jsp?benchmark=gfx40&os=Windows&api=gl&cpu-arch=x86&hwtype=iGPU&hwname=Intel(R)%20HD%20Graphics%20620&did=49370014&D=Intel(R)%20Core(TM)%20i7-8650U%20CPU%20with%20HD%20Graphics%20620

https://gfxbench.com/device.jsp?benchmark=gfx40&os=Windows&api=gl&cpu-arch=x86&hwtype=iGPU&hwname=Intel(R)%20UHD%20Graphics%20620&did=50817616&D=Intel(R)%20Core(TM)%20i7-8650U%20CPU%20with%20UHD%20Graphics%20620
#60
Parn
TheLostSwede said:
That's what they do: 16 for GPUs, 4 for DMI...
Ahh, my bad. Forgot about the 4 lanes for DMI 3.0. Anyway, adding another 4 for storage would only increase the total to 24, still significantly less than 40.
#61
infrared
I think things have got back on track, but on the off chance anyone wants to derail the topic again, I'll post this warning: no more petty squabbling please. I don't mod this section so I can't clean up, but I will be issuing thread bans to anyone who can't keep it impersonal and on topic.

I'm looking forward to seeing how Coffee Lake does. This whole debate about PCIe lanes and bandwidth is daft; it's got plenty for mainstream use. As others have said, having multiple M.2 drives, multiple GPUs and 10 Gbit Ethernet isn't common and absolutely qualifies as enthusiast. That's not the market segment this is aimed at.
#62
hat
Enthusiast
Even if you had all that...

2 GPUs - 32 lanes
2 m.2 PCI-E drives - 8 lanes
10GbE - 4 lanes

Which brings us to a grand total of 44 lanes. Now... 16 lanes by CPU, 24 by chipset, so... that's 40 lanes. You can drop the lanes requirement here by running the GPUs in 8x/8x, which is plenty even for the most powerful GPUs. Now you need only 28 lanes for all that. Or, you could even run 16x/8x and you would need 36 lanes. Still 4 to go before you hit the total of 40 offered by this platform. Is running one GPU in 8x mode really gonna hurt that bad? I think not.
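The 16x/8x tally above can be sketched as a quick budget check (the build list is the hypothetical one from the post):

```python
# Hypothetical enthusiast build: lane demand vs. what the platform
# offers (16 CPU lanes + 24 chipset lanes = 40 total).
demand = {
    "GPU #1 (x16)": 16,
    "GPU #2 (x8)": 8,        # running the second GPU at x8 costs little
    "M.2 NVMe #1 (x4)": 4,
    "M.2 NVMe #2 (x4)": 4,
    "10GbE NIC (x4)": 4,
}
available = 16 + 24  # CPU + chipset

total = sum(demand.values())
print(f"Lanes needed: {total}, available: {available}, spare: {available - total}")
```

As the post says, a 16x/8x GPU split needs 36 lanes, leaving 4 spare out of 40 (lane *count* only; the chipset lanes still share one DMI uplink for bandwidth).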
#63
Chris
No NVMe Raid? Not viable as the data passes through DMI 3.0?

Well, AMD took care of that.

You lost me, Intel.
#64
EarthDog
Like many people need or want that on a mainstream platform?? Well, you, I see. :)

Depends... some boards funnel at least one M.2 through the CPU, avoiding DMI anyway. Just do some research. ;)
#65
Vayra86
I'm still wondering why we are all talking about PCIe SSDs when most people are never going to see their increased performance over SATA SSDs, and they are still more expensive storage. The vast majority doesn't even know they exist. It's a mainstream platform, and it has always had its limitations, its concessions even; that has always extended to how the PCIe lanes are routed.
#66
Aquinus
Resident Wat-man
You're sharing the equivalent of 4 PCIe 3.0 lanes by using DMI 3.0 through the PCH, though, so a single NVMe device could saturate the available bandwidth provided by the PCH. 24 PCIe lanes is nice, but not when it's fed by only 4 lanes' worth of bandwidth, and NVMe RAID would literally run worse because it would strangle DMI.
#67
EarthDog
Correct, when using chipset-attached lanes (though it doesn't run worse; there are gains, which then saturate the DMI bandwidth).

Not with CPU-connected lanes, though. A PCIe riser card comes to mind. Or a mixed RAID with a CPU-connected drive and a chipset-attached drive (which would cap lower than using all CPU-attached lanes).
#68
nofear2017
dcf-joe said:
Honest question, if I just wanted to have a full 16 lane GPU and one modern high-speed nvme drive, I should be fine with this chipset right?
Long answer short: yes.

For daily usage, 24 (PCH) + 16 (CPU PCIe) lanes is more than enough for a two-GPU setup with one M.2 (960 PRO) + SATA SSD (850 EVO), but for anything beyond that configuration I would suggest the HEDT platform.
#69
boe
Is it some weird cabal involving Intel and motherboard manufacturers that purposely makes the whole PCIe lanes thing confusing? Why make motherboards with four x16 PCIe slots if the CPUs can't handle them? I'm about to build my next combo gaming/storage server. I'll have an x16 1180 video card, an x8 PCIe RAID controller with 4 GB of cache, and a 4x10Gb x8 PCIe network card. Technically all I know is I need 32 PCIe lanes for my cards. No, I don't want to run my video card at x8 any more than I want to drive in rush hour traffic when the 405 is cut down to 2 lanes. I'm also very curious why there has been virtually no innovation in PCIe lane counts on the standard processor. I'm not sure why Intel plays games with their PCIe lanes, but I'm starting to take some schadenfreude in their latest issues: viruses that attack their processors, no luck with the new lower-nm fabrication, and falling market share to AMD. It's hard to feel a lot of sympathy or loyalty for a company without any transparency to their own customer base.
#70
nemesis.ie
@boe, it sounds like Threadripper is the platform you need.
#71
bug
boe said:
Is it some weird cabal involving Intel and motherboard manufacturers that purposely makes the whole PCIe lanes thing confusing? Why make motherboards with four x16 PCIe slots if the CPUs can't handle them? I'm about to build my next combo gaming/storage server. I'll have an x16 1180 video card, an x8 PCIe RAID controller with 4 GB of cache, and a 4x10Gb x8 PCIe network card. Technically all I know is I need 32 PCIe lanes for my cards. No, I don't want to run my video card at x8 any more than I want to drive in rush hour traffic when the 405 is cut down to 2 lanes. I'm also very curious why there has been virtually no innovation in PCIe lane counts on the standard processor. I'm not sure why Intel plays games with their PCIe lanes, but I'm starting to take some schadenfreude in their latest issues: viruses that attack their processors, no luck with the new lower-nm fabrication, and falling market share to AMD. It's hard to feel a lot of sympathy or loyalty for a company without any transparency to their own customer base.
There's no foul play. Instead of dictating a fixed split between lanes, the manufacturers can configure them however they want (more or less). You'd probably be more unhappy if Intel dictated a fixed configuration instead.

Plus, you're really misinformed. Yes, the number of lanes hasn't gone up much, but the speed of each lane has. And since you can split lanes, you can actually connect a lot more PCIe 2.0 devices at once than you could a few years ago. But all in all, PCIe lanes have already become a scarce resource with the advent of NVMe. Luckily we don't need NVMe at the moment, but I expect this will change in a few years, so we'd better get more PCIe lanes by then.
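The "speed of each lane went up" point can be put in rough numbers (approximate per-lane throughput after encoding overhead; the lane *count* on mainstream parts barely moved while per-lane speed doubled each generation):

```python
# Approximate usable bandwidth per lane, in GB/s, after line encoding:
# PCIe 1.x/2.0 use 8b/10b encoding, PCIe 3.0 uses 128b/130b.
per_lane_gbps = {
    "PCIe 1.x": 0.25,   # 2.5 GT/s
    "PCIe 2.0": 0.5,    # 5 GT/s
    "PCIe 3.0": 0.985,  # 8 GT/s
}

for gen, bw in per_lane_gbps.items():
    print(f"{gen}: per lane {bw} GB/s, x16 slot ~{bw * 16:.1f} GB/s")
```

So the same 16 CPU lanes carry roughly four times the data they did in the PCIe 1.x era, which is why splitting them across more 2.0-class devices works.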
#72
newtekie1
Semi-Retired Folder
boe said:
Is it some weird cabal involving Intel and motherboard manufacturers that purposely make the whole PCIe lanes thing confusing? Why make motherboards with 4 16x PCIe slots if the CPUs can't handle them? I'm about to build my next combo gaming/storage server. I'll have a 16x 1180 video card, a 8x pcie 4GB cache raid controller and a 4x10gb 8x pcie network card. Technically all I know is I need 32 PCIe lanes for my cards. No, I don't want to run my video card at 8x any more than I want to drive at in rush hour traffic when the 405 is cut down to 2 lanes. I'm also very curious why there has been virtually no innovation on the standard processor for PCIe lanes. I'm not sure why intel plays games with their PCIe lanes but I'm starting to take some schadenfreude at their latest issues - viruses that attack their processors, no luck in the new lower nm fabrication and falling market share to AMD. Hard to feel a lot of sympathy or loyalty to a company without any transparency to their own customer base.
With the exception of the video card, the other devices don't need to be directly connected to the CPU. The minor latency introduced by going through the chipset first isn't noticed with storage controllers and NIC cards.
#73
bug
newtekie1 said:
With the exception of the video card, the other devices don't need to be directly connected to the CPU. The minor latency introduced by going through the chipset first isn't noticed with storage controllers and NIC cards.
One thing I never figured out is when I have two NVMe drives connected to the "southbridge", can they talk directly to each other or do they still have to go through the CPU?
#74
newtekie1
Semi-Retired Folder
bug said:
One thing I never figured out is when I have two NVMe drives connected to the "southbridge", can they talk directly to each other or do they still have to go through the CPU?
That is the beauty of DMA: it allows devices to talk directly to each other with very minimal involvement from the CPU or system RAM.

Yes, the link back to the CPU can become a bottleneck, but that is a 4 GB/s bottleneck. If you have a few NVMe SSDs in RAID, the link to the CPU could be the limiting factor. But will you notice it during actual use? Not likely. You won't be able to get those sweet, sweet benchmark scores for maximum sequential read/write, but normal use isn't sequential read/write, so it doesn't really matter. And even still, 4 GB/s of read/write speed is still damn fast.

But DMA means that data doesn't have to flow up to the CPU all the time. If you have a 10 Gb/s NIC and an NVMe SSD, data can flow directly from the SSD to the NIC.
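That ceiling can be sketched with a couple of lines (the drive figure is a hypothetical ~3.2 GB/s sequential-read NVMe SSD; the DMI 3.0 uplink is roughly a PCIe 3.0 x4's ~3.9 GB/s):

```python
# A chipset-attached NVMe RAID 0 aggregates drive speeds, but the
# result is capped by the DMI 3.0 uplink back to the CPU.
DMI_GBPS = 3.9          # approx. usable DMI 3.0 bandwidth
DRIVE_READ_GBPS = 3.2   # hypothetical fast NVMe SSD, sequential read

def raid0_effective_read(n_drives: int) -> float:
    """Aggregate sequential read in GB/s, capped by the DMI uplink."""
    return min(n_drives * DRIVE_READ_GBPS, DMI_GBPS)

print(raid0_effective_read(1))  # one drive: below the DMI ceiling
print(raid0_effective_read(2))  # two drives: capped at the DMI ceiling
```

With one drive you see the drive's full speed; with two, the array tops out at the uplink, which is exactly the "benchmark scores, not real use" trade-off described above.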
#75
boe
bug said:
There's no foul play. Instead of dictating a fixed split between lanes, the manufacturers can configure them however they want (more or less). You'd probably be more unhappy if Intel dictated a fixed configuration instead.

Plus, you're really misinformed. Yes, the number of lanes hasn't gone up much, but the speed of each lane has. And since you can split lanes, you can actually connect a lot more PCIe 2.0 devices at once than you could a few years ago. But all in all, PCIe lanes have already become a scarce resource with the advent of NVMe. Luckily we don't need NVMe at the moment, but I expect this will change in a few years, so we'd better get more PCIe lanes by then.
I get that 3.0 is faster than 2.0, but since all my equipment is 3.0 I need more lanes, and even if they made it 4.0 my 3.0 equipment wouldn't perform any faster with insufficient lanes. It seems I still don't understand the situation though. Let's say I got an 8700K CPU. On Intel's website it says I have 16 PCIe lanes; some websites say there are 28, and some say 40. Some people are talking about CPUs having more that are for the motherboard. I don't know how many more there are for the motherboard that go to the slots (if any), and how do I know they aren't used up by resources like USB ports, onboard SATA and RAID controllers, onboard sound cards, onboard Wi-Fi, onboard NIC ports? My guess is those alone might be using at least a dozen PCIe lanes. So again, unless manufacturers tell us how many lanes are available, fixed or otherwise, for the slots, it seems like a crapshoot at best.