
Intel "Coffee Lake" Platform Detailed - 24 PCIe Lanes from the Chipset

See... you hang on to points like this (though I didn't call you a fanboy, I just said you lean one way, note) while not addressing the counterpoints (the usefulness of HBM over the next couple of years, PCIe lanes making a huge difference)... you're running out of steam and the straw-man arguments are getting old.
 
Throw a rock at an argument and you're likely to hit one where this happens.
 
Bad, evil, hated, does it matter? You all just declared I'm a fanboy against all logic to define that. It goes entirely against the narrative and you just keep on grabbing it and running with it. And every time I see it I'm like, BUT HOOOOOOOW!?!!?!
In this thread, this all started because you complained about the number of PCIe lanes on announced mainstream chips compared to the number of lanes on HEDT nine years ago. Were you expecting thanks?
 
He should just be banned. Very annoying user. He must have nightmares about Nvidia and Intel.
 
I wouldn't ban him just for being annoying. I'd just like him to open his eyes and post a little more on topic.
 
On other forums I visit, you are not allowed to constantly spread hate against a brand. It's one thing to share your opinion with an objective basis; it's another to just spam and show everyone how much you hate some company. And that's what this guy has been doing since Ryzen was released, to the point that no one takes him seriously anymore.
 
I don't get the feeling he's spreading hate so much as praising AMD even when there's little reason to do so. But then again, I hadn't been paying much attention until recently, when he crossed the line into annoying territory.
 
Due to the integrated graphics, Intel only has so much die space to play with before the chips get too costly to make.

Although they're not perfect comparisons, you can see from die shots that the I/O takes up a lot more space on the parts with the bigger lane counts, and part of that is the PCIe root complex. So there are trade-offs to be made when it comes to die space used up by whatever you want to stick inside a chip.

Likewise, AMD compromised on Ryzen: although we get 20 usable lanes from the CPU, the chipset is crippled by only offering PCIe 2.0 lanes. Surprisingly, the NVMe performance difference between Ryzen and Intel (at least in my case, using a Plextor M8PeG drive) is actually in favour of Intel in most tests, and that was on a Z170 board.

Regardless, it would be nice to see Intel add another 4-8 PCIe lanes to the CPU that could be used for storage and, say, 10 Gbps Ethernet.

Well, the 6950X had a 40-lane PCIe root complex. Even if that were halved, I'm pretty sure Intel could squeeze 20 lanes onto the 7700K and still keep the die size reasonably cost-effective.
 
That's what they do: 16 for GPUs, 4 for DMI...
 
Ahh, my bad, I forgot about the 4 lanes for DMI 3.0. Anyway, adding another 4 for storage would only bring the total to 24, still significantly fewer than 40.
 
I think things have got back on track, but on the off chance anyone wants to derail the topic again, I'll post this warning: no more petty squabbling, please. I don't mod this section so I can't clean up, but I will be issuing thread bans to anyone who can't keep it impersonal and on topic.

I'm looking forward to seeing how Coffee Lake does. This whole debate about PCIe lanes and bandwidth is daft; it's got plenty for mainstream use. As others have said, having multiple M.2 drives, multiple GPUs and 10 Gbit Ethernet isn't common and absolutely qualifies as enthusiast use. That's not the market segment this is aimed at.
 
Even if you had all that...

2 GPUs - 32 lanes
2 M.2 PCIe drives - 8 lanes
10GbE NIC - 4 lanes

Which brings us to a grand total of 44 lanes. Now... 16 lanes from the CPU, 24 from the chipset, so that's 40 lanes. You can cut the lane requirement by running the GPUs at 8x/8x, which is plenty even for the most powerful GPUs; then you only need 28 lanes for all that. Or you could even run 16x/8x and you would need 36 lanes, still 4 lanes under the 40 offered by this platform. Is running one GPU in 8x mode really going to hurt that badly? I think not.
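If anyone wants to sanity-check that arithmetic, here's a quick back-of-the-envelope tally in Python. The per-device lane widths are just the ones assumed in the list above, and the 16 CPU + 24 chipset split is the Coffee Lake figure from the article; this is a sketch, not an official configuration tool.

# Hypothetical lane budget for the build described above (assumed widths).
devices = [
    ("GPU #1 (16x)", 16),
    ("GPU #2 (16x)", 16),
    ("M.2 NVMe #1 (4x)", 4),
    ("M.2 NVMe #2 (4x)", 4),
    ("10GbE NIC (4x)", 4),
]
cpu_lanes, chipset_lanes = 16, 24            # Coffee Lake: CPU + PCH lanes

demand = sum(width for _, width in devices)  # 44 lanes at full width
budget = cpu_lanes + chipset_lanes           # 40 lanes available
print(f"Full-width demand: {demand} lanes vs budget: {budget} lanes")

# Running the two GPUs at 8x/8x frees 16 lanes and brings demand to 28.
demand_8x = demand - 16
print(f"8x/8x demand: {demand_8x} lanes, {budget - demand_8x} to spare")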
 
No NVMe RAID? Not viable, since the data passes through DMI 3.0?

Well, AMD took care of that.

You lost me, Intel.
 
Like many people need or want that on a mainstream platform?? Well, you, I see. :)

Depends... some boards route at least one M.2 slot through the CPU, avoiding DMI anyway. Just do some research. ;)
 
I'm still wondering why we're all talking about PCIe SSDs when most people are never going to notice the increased performance over a SATA SSD, and it's still more expensive storage. The vast majority doesn't even know it exists. This is a mainstream platform, and it has always had its limitations, its concessions even; that has always extended right down to how the PCIe lanes are routed.
 
Everything behind the PCH shares the equivalent of 4 PCIe 3.0 lanes over DMI 3.0, though, so a single NVMe device could saturate the bandwidth the PCH provides. 24 PCIe lanes is nice, but not when it's fed by only 4 lanes' worth of bandwidth, and NVMe RAID would literally run worse because it would strangle DMI.
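To put rough numbers on that, here's a small sketch. It treats DMI 3.0 as equivalent to a PCIe 3.0 x4 link and uses ~3.5 GB/s sequential read as a ballpark for a fast NVMe drive (960 PRO class); both figures are assumptions for illustration, not measurements.

# DMI 3.0: four PCIe 3.0 lanes at 8 GT/s with 128b/130b encoding.
per_lane_gbps = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s per lane
dmi_gbps = 4 * per_lane_gbps                  # ~3.94 GB/s for the whole PCH
nvme_gbps = 3.5                               # assumed fast-drive sequential read

print(f"DMI 3.0 budget: ~{dmi_gbps:.2f} GB/s")
print(f"One fast NVMe : ~{nvme_gbps:.2f} GB/s")
print(f"Headroom left : ~{dmi_gbps - nvme_gbps:.2f} GB/s for everything else behind the PCH")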
 
Correct, when using chipset-attached lanes (though it doesn't run worse; there are gains, they just saturate the DMI bandwidth).

Not with CPU-connected lanes, though. A PCIe riser card comes to mind. Or a mixed RAID with a CPU-connected drive and a chipset-attached drive (which would cap lower than using all CPU-attached lanes).
 
Honest question: if I just wanted a full 16-lane GPU and one modern high-speed NVMe drive, I should be fine with this chipset, right?


Long story short: yes.

For daily usage, 24 PCH + 16 CPU PCIe lanes is more than enough for a 2-GPU setup with one M.2 drive (960 PRO) plus a SATA SSD (850 EVO), but for anything beyond that configuration I would suggest the HEDT platform.
 
Is it some weird cabal involving Intel and motherboard manufacturers that purposely makes the whole PCIe lanes thing confusing? Why make motherboards with four 16x PCIe slots if the CPUs can't handle them? I'm about to build my next combo gaming/storage server. I'll have a 16x 1180 video card, an 8x PCIe RAID controller with 4 GB of cache, and a quad-port 10 Gb 8x PCIe network card. Technically, all I know is that I need 32 PCIe lanes for my cards. No, I don't want to run my video card at 8x any more than I want to drive in rush-hour traffic when the 405 is cut down to two lanes. I'm also very curious why there has been virtually no innovation in PCIe lane counts on standard processors. I'm not sure why Intel plays games with their PCIe lanes, but I'm starting to take some schadenfreude in their latest issues: viruses that attack their processors, no luck with their new lower-nm fabrication, and falling market share to AMD. It's hard to feel a lot of sympathy or loyalty for a company with no transparency toward its own customer base.
 
@boe, it sounds like Threadripper is the platform you need.
 
There's no foul play. Instead of dictating a fixed split between lanes, the manufacturers can configure them however they want (more or less). You'd probably be more unhappy if Intel dictated a fixed configuration instead.

Plus, you're really misinformed. Yes, the number of lanes hasn't gone up much, but the speed of each lane did. And since you can split lanes, you can actually connect a lot more PCIe 2.0 devices at once than you could a few years ago. But all in all, PCIe lanes have already become a scarce resource with the advent of NVMe. Luckily we don't really need NVMe at the moment, but I expect that will change in a few years, so we'd better get more PCIe lanes by then.
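To put numbers on "the speed of each lane went up", here's a rough per-lane comparison: raw transfer rate times encoding efficiency, ignoring protocol overhead, so treat the results as approximations.

# Approximate usable bandwidth per lane, per PCIe generation.
generations = {
    "PCIe 1.0": (2.5e9, 8 / 10),     # 2.5 GT/s, 8b/10b encoding
    "PCIe 2.0": (5.0e9, 8 / 10),     # 5 GT/s, 8b/10b encoding
    "PCIe 3.0": (8.0e9, 128 / 130),  # 8 GT/s, 128b/130b encoding
}
for name, (rate, efficiency) in generations.items():
    mb_per_s = rate * efficiency / 8 / 1e6
    print(f"{name}: ~{mb_per_s:.0f} MB/s per lane")
# Roughly 250, 500 and 985 MB/s: one 3.0 lane carries about as much as four 1.0 lanes.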
 
With the exception of the video card, the other devices don't need to be directly connected to the CPU. The minor latency introduced by going through the chipset first isn't noticed with storage controllers and NIC cards.
 
One thing I never figured out: when I have two NVMe drives connected to the "southbridge", can they talk directly to each other, or do they still have to go through the CPU?
 
That is the beauty of DMA: it allows devices to talk directly to each other with very minimal interaction from the CPU or system RAM.

Yes, the link back to the CPU can become a bottleneck, but it's a ~4 GB/s bottleneck. If you have a few NVMe SSDs in RAID, the link to the CPU could be the limiting factor. But will you notice it during actual use? Not likely. You won't be able to get those sweet, sweet benchmark scores for maximum sequential read/write, but normal use isn't sequential read/write, so it doesn't really matter. And even then, 4 GB/s of read/write speed is still damn fast.

But DMA means that data doesn't have to flow up to the CPU all the time. If you have a 10 Gb/s NIC and an NVMe SSD, data will flow directly from the SSD to the NIC.
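A rough sketch of both scenarios, again assuming DMI 3.0 behaves like a PCIe 3.0 x4 link and ~3.5 GB/s sequential read per fast NVMe drive (ballpark assumptions, not benchmarks):

dmi_gbps = 4 * 8e9 * (128 / 130) / 8 / 1e9   # DMI 3.0 ceiling, ~3.94 GB/s
nvme_gbps = 3.5                               # assumed per-drive sequential read
nic_gbps = 10e9 / 8 / 1e9                     # 10GbE line rate, 1.25 GB/s

# Chipset-attached NVMe RAID: aggregate sequential speed caps at the DMI link.
for n in (1, 2, 3):
    print(f"{n} drive(s): ~{min(n * nvme_gbps, dmi_gbps):.2f} GB/s effective over DMI")

# SSD-to-NIC transfers: the NIC is the limit, with plenty of DMI budget to spare.
print(f"10GbE needs ~{nic_gbps:.2f} GB/s of the ~{dmi_gbps:.2f} GB/s DMI budget")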
 