Wednesday, July 25th 2018

Intel Core i9 8-core LGA1151 Processor Could Get Soldered IHS, Launch Date Revealed

The fluid thermal interface material between the processor die and the IHS (integrated heatspreader) has been a major complaint of PC enthusiasts in recent times, especially given that AMD uses a soldered IHS (believed to be more effective at heat transfer) across its Ryzen processor line. We're getting reports that Intel plans to give at least its top-dog Core i9 "Whiskey Lake" 8-core socket LGA1151 processor a soldered IHS. The top three parts of this family were detailed in our older article.

The first Core i9 "Whiskey Lake" SKU is the i9-9900K, an 8-core/16-thread chip clocked between 3.60 and 5.00 GHz, armed with 16 MB of L3 cache. The introduction of the Core i9 extension to the mainstream desktop segment suggests Intel is carving out a new price point for this platform, above the $300-350 traditionally held by the top Core i7 "K" SKUs of previous generations. In related news, we are also hearing that the i9-9900K could launch as early as 1st August 2018. This explains why motherboard manufacturers are in such a hurry to release BIOS updates for their current 300-series chipset motherboards.
Source: Coolaler

79 Comments on Intel Core i9 8-core LGA1151 Processor Could Get Soldered IHS, Launch Date Revealed

#51
hat
Enthusiast
MxPhenom 216Well, let's face it. Overclocking is pretty dead with the clock speeds these chips are pushing from Turbo Boost out of the box. Because of this I don't really give a shit what's between the IHS and die.
You aren't a regular user. You use an alternative cooling method that far surpasses the stock cooler, which ensures you get that advertised turbo boost. The majority of users don't even understand how this works, let alone know they can install a better cooler to avoid the throttling-level temperatures they aren't even aware of.

As for overclocking, we have yet to see what these particular chips can do... but aside from that, the bigger point is that Intel also killed overclocking on all but the most expensive chips. I am confident that if we could overclock anything we wanted, like we could up until Sandy Bridge, there would be a lot fewer users on this forum with the 8600K or 8700K; they would be using the much more affordable 8400 or an even lower model. I'd be perfectly happy with one of those $100-ish quad-core i3 chips running at 5 GHz, but Intel killed that possibility.
Posted on Reply
#52
Octopuss
ValantarThat jives pretty well with der8auer's recent look into the question of "can you solder an IHS yourself?", but with one major caveat: the difference in complexity and cost between doing a one-off like the process shown there and doing the same on an industrial scale should really not be underestimated. Intel already knows how to do this. They already own the tools and machinery, as they've done this for years. Intel can buy materials at bargain-basement bulk costs. Intel has the engineering expertise to minimize the occurrence of cracks and faults. And it's entirely obvious that an industrial-scale process like this would be fine-tuned to minimize the soldering process causing cracked dice and other failures.
Um, no. It's not about that. Look at the conclusion.
Posted on Reply
#53
ShurikN
OctopussUm, no. It's not about that. Look at the conclusion.
"Intel has some of the best engineers in the world when it comes to metallurgy. They know exactly what they are doing and the reason for conventional thermal paste in recent desktop CPUs is not as simple as it seems."
Doesn't change the fact that they used probably the worst possible TIM from Skylake onwards. We probably wouldn't be having this entire TIM vs. solder debate if Intel had used a higher-quality paste.
Intel has some of the best engineers; they also have some of the best people for catering to investors. The latter are higher in the hierarchy.

"Micro cracks in solder preforms can damage the CPU permanently after a certain amount of thermal cycles and time."
Nehalem is soldered. It was released 10 years ago. People are still pushing those CPUs hard to this day. They still work fine.

"Thinking about the ecology it makes sense to use conventional thermal paste."
Coming from a guy known for extreme overclocking, where a setup can consume close to 1 kW.
Posted on Reply
#54
R0H1T
OctopussUm, no. It's not about that. Look at the conclusion.
Hmm nope, look at the last two pages. Look at AMD, look at Xeon. There is no scientific evidence that soldering harms the CPU, just some minor observations by der8auer.
Posted on Reply
#55
phill
newtekie1Yes, but even with SLI and multiple M.2 drives, the 40 lanes that an 8700K on a Z370 motherboard provides are more than enough. Even if you've got two GPUs and two high-speed M.2 drives, that's only 24 lanes total, leaving another 16 for other devices. There isn't that much more you can put on a board that needs 16 lanes of PCI-E bandwidth.
Please excuse my ignorance, but don't GPUs use 16 lanes each (depending on the motherboard, if you have dual GPUs in)?

I've only ever used single GPUs on the Zxx/Z1xx boards, but on X99 and the like I've used dual up to quad GPUs. But things were different back in the days of the 920 D0s and the Classified motherboards I had then, I think.
Posted on Reply
#56
Valantar
OctopussUm, no. It's not about that. Look at the conclusion.
What point in that conclusion don't I address? I suppose the environmental one (which is important! I really, really want electronics production to pollute less!), but how much gold and indium solder is required per CPU? An almost immeasurably small amount of gold (1-3 atoms thick layers don't weigh much), and perhaps a gram or two of indium. Plus teeny-tiny amounts of the other stuff too. Even producing hundreds of thousands of CPUs, the amounts of material required would be very very small for an industrial scale.
phillPlease excuse my ignorance, but don't GPUs use 16 lanes each (depending on the motherboard, if you have dual GPUs in)?

I've only ever used single GPUs on the Zxx/Z1xx boards, but on X99 and the like I've used dual up to quad GPUs. But things were different back in the days of the 920 D0s and the Classified motherboards I had then, I think.
Mainstream motherboards with SLI/CF support split the single PCIe x16 "PCIe Graphics" lane allocation they have available into x8+x8 when two GPUs are installed in the correct motherboard slots, simply because there are no more PCIe lanes to allocate. If there were more, this wouldn't happen, but that is only the case on HEDT platforms. As such, CF/SLI on mainstream platforms is (and has always been) x8+x8. CF can even go down to x8+x4+x4.

GPUs can, in essence, run on however many PCIe lanes you allocate to them (up to their maximum, which is 16). As such, you can run a GPU off an x1 slot if you wanted to - and it would work! - but it would be bottlenecked beyond belief (except for cryptomining, which is why miners use cheap x1 PCIe risers to connect as many GPUs as possible to their rigs). However, the difference between x8 and x16 is barely measurable in the vast majority of games, let alone noticeable. It's usually in the 1-2% range.
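For a rough sense of the numbers, here's a quick back-of-the-envelope sketch (assuming the usual ~985 MB/s of usable bandwidth per PCIe 3.0 lane; ballpark only, not exact):

```python
# Back-of-the-envelope PCIe 3.0 bandwidth per link width.
# Assumes 8 GT/s per lane with 128b/130b encoding, ~985 MB/s usable per lane.
PCIE3_MBPS_PER_LANE = 985

for lanes in (1, 4, 8, 16):
    print(f"x{lanes:<2} -> ~{lanes * PCIE3_MBPS_PER_LANE / 1000:.1f} GB/s")

# x1  -> ~1.0 GB/s, x4 -> ~3.9 GB/s, x8 -> ~7.9 GB/s, x16 -> ~15.8 GB/s
# A single 2018-era GPU rarely saturates x8, hence the ~1-2% gap vs. x16 in games.
```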
Posted on Reply
#57
newtekie1
Semi-Retired Folder
Valantar1) Intel doesn't have 2-core desktop dice, only 4- and 6-core ones. The rest are harvested/disabled.
Point still stands: different dies, same series.
Valantar2) The difference between Raven Ridge and Summit/Pinnacle Ridge is far bigger than between any mainstream Intel chips, regardless of differences in core count. The Intel 4+2 and 6+2 dice are largely identical except for the 2 extra cores. All Summit/Pinnacle Ridge chips (and Threadripper) are based off the same 2-CCX iGPU-less die (well, updated/tuned for Pinnacle Ridge and the updated process node, obviously). Raven Ridge is based off an entirely separate die design with a single CCX, an iGPU, and a whole host of other uncore components belonging to that. The difference is comparable to if not bigger than the difference between the ring-bus MSDT and the mesh-interconnect HEDT Intel chips.
The CPU core design is still identical. The die has one CCX removed and a GPU added, but the CPU cores are identical to Ryzen cores.
Valantar3) If "Same socket, in the same product stack" is the rule, do you count Kaby Lake-X as the same series as Skylake-X?
Yep.
Valantar4) "Same product stack" is also grossly misleading. From the way you present this, Intel has one CPU product stack - outside of the weirdly named low-end core-based Pentium and Celerons, that is, which seem to "lag" a generation or two in their numbering. They all use the same numbering scheme, from mobile i3s to HEDT 18-core i9s. But you would agree that the U, H and other suffixes for mobile chips place them in a different product stack, no? Or would you say that Intel has no mobile product stack? 'Cause if you think they do, then you have to agree that the G suffix of the desktop RR APUs also makes that a separate product stack. Not to mention naming: Summit and Pinnacle Ridge are "Ryzen". Then there's "Ryzen Threadripper". Then there's "Ryzen with Vega Graphics". Subsets? Sure. Both are. But still separate stacks.
The product stack is the current generation of processors on the same socket.

Intel's current product stack on the 1151(300) socket ranges from the Celeron G4900 all the way up to the 8700K. The mobile processors are a completely different series and product stack.
ValantarYou're right that DMA alleviates this somewhat, but that depends on the workload. Is all you do with your SSDs copying stuff between them? If not, the data is going to go to RAM or CPU. If you have a fast NIC, have you made sure that the drive you're downloading to/uploading from is connected off the PCH and not the CPU? 'Cause if not, you're - again - using that QPI link. And so on, and so on. The more varied your load, the more that link is being saturated. And again, removing the bottleneck almost entirely would not be difficult at all - Intel would just have to double the lanes for the uplink. This would require a tiny increase in die space on the CPUs and PCHes, and somewhat more complex wiring in the motherboard, but I'm willing to bet the increase in system cost would be negligible.
Yes, removing the limit would be easy, but it hasn't become necessary. Even on their HEDT platform, the link hasn't become an issue. The fact is with the exception of some very extreme fringe cases, the QPI link between the chipset and the CPU isn't a bottleneck. The increased cost would be negligible, but so would the increase in performance.

The drive is never connected to the CPU on the mainstream platform. Data will flow from the NIC directly to the drive through the PCH. The QPI link to the CPU never comes into play. And even if it did, a 10Gb NIC isn't coming close to maxing out the QPI link. It would use about 1/4th of the QPI link.
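Rough math behind that fraction, as a ballpark sketch that treats the chipset uplink as the equivalent of a PCIe 3.0 x4 link and ignores protocol overhead on both sides:

```python
# Rough comparison of a 10GbE NIC's peak traffic vs. the chipset uplink,
# treating the uplink as a PCIe 3.0 x4 equivalent (~985 MB/s per lane).
uplink_mb_s = 4 * 985        # ~3940 MB/s usable across the x4 uplink
nic_mb_s = 10_000 / 8        # 10 Gb/s line rate -> 1250 MB/s

print(f"NIC peak: ~{nic_mb_s:.0f} MB/s")
print(f"Uplink:   ~{uplink_mb_s:.0f} MB/s")
print(f"Share:    ~{nic_mb_s / uplink_mb_s:.0%}")
# Roughly 30% by this estimate; with real-world NIC overhead (~1.0-1.2 GB/s
# actual throughput) it lands nearer the quarter figure.
```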
ValantarApparently you're not familiar with Intel HSIO/Flex-IO or the feature sets of their chipsets. You're partially right that USB is provided - 2.0 and 3.0, but not 3.1, except on the 300-series chipsets other than Z370 (which is really just a rebranded Z270). Ethernet is done through separate controllers over PCIe, and SATA shares lanes with PCIe. Check out the HSIO lane allocation chart from AnandTech's Z170 walkthrough from the Skylake launch - the only major difference between this and Z270/370 is the addition of a sixth PCIe 3.0 x4 controller, for 4 more HSIO lanes. How they can be arranged/split (and crucially, how they can not) works exactly the same. Note that Intel's PCH spec sheets (first picture here) always say "up to" X number of USB ports/PCIe lanes and so on - due to them being interchangeable. Want more than 6 USB 3.0 ports? That takes away an equivalent amount of PCIe lanes. Want SATA ports? All of those occupy RST PCIe lanes, though at least some can be grouped on the same controller. Want dual Ethernet? Those will eat PCIe lanes too. And so on. The moral of the story: an implemented Intel chipset does not have the amount of available PCIe lanes that Intel advertises it has.
Yes, you are correct, I was wrong about those bits not using PCI-E lanes from the PCH. But that still leaves 16 or 17 dedicated PCI-E lanes coming off the PCH even with SATA, USB 3.0, and a Gb NIC. So combined with the lanes from the CPU you still get 32 PCI-E lanes, more than enough for two graphics cards, two M.2 drives, and extra crap.
Posted on Reply
#58
hat
Enthusiast
OctopussUm, no. It's not about that. Look at the conclusion.
The only valid point there is the rarity of indium, plus the environmental impact and whatever other crap goes into sourcing it. The points about the longevity of solder, and the difficulty of doing it, make zero sense. Intel has been soldering CPUs (which still work) for decades. They can do it right, and there aren't adverse effects from it.
phillPlease excuse my ignorance, but don't GPUs use 16 lanes each (depending on the motherboard, if you have dual GPUs in)?
More like "up to" 16 lanes. 16 lanes is optimal, but x8 or even x4 (if PCI-E 3.0) works just as well. You can take away lanes from the GPU to allocate them elsewhere... like that mountain of NICs and SSDs everybody seems to need.
ValantarWhat point in that conclusion don't I address? I suppose the environmental one (which is important! I really, really want electronics production to pollute less!), but how much gold and indium solder is required per CPU? An almost immeasurably small amount of gold (1-3 atoms thick layers don't weigh much), and perhaps a gram or two of indium. Plus teeny-tiny amounts of the other stuff too. Even producing hundreds of thousands of CPUs, the amounts of material required would be very very small for an industrial scale.
As I said before, a valid concern, likely the only one. That said, if they were gonna use paste, they could have done better than the garbage that's on them now. Surely they could have struck a deal with the CoolLab people or something. While inconvenient, I can always do that myself. I don't think I should have to and I would still prefer solder, but at least I can remedy that. I can also install a better cooler, like the h70 I'm using now. But there's still a very large group full of regular people who aren't really getting what they should be thanks to a nasty cocktail consisting of crappy stock cooling solutions, crappy paste rather than solder (or at least a superior paste), and sneaky marketing schemes including the words "up to".

There's no good reason to deliberately make them run hot. Sure it might "work", it might be "in spec", but they can do better than that. I've seen many times in the past that the lifetime of electronics is positively affected when they run nice and cool. I've also seen the lifetime of electronics affected negatively due to poorly designed, hot running and/or insufficiently cooled garbage.
Posted on Reply
#59
Hood
VulkanBrosWhy not just buy an AMD CPU.......
Because the soldered i9 will be much faster than anything AMD has, for at least the next year. It should be the best upgrade since Sandy. Possible OC to 5.5 stable on water? I think the i9-9900K will be the best-selling CPU in 2018 (definitely if solder is true, and maybe even with paste).
Posted on Reply
#60
newtekie1
Semi-Retired Folder
HoodBecause the soldered i9 will be much faster than anything AMD has, for at least the next year. It should be the best upgrade since Sandy. Possible OC to 5.5 stable on water? I think the i9-9900K will be the best-selling CPU in 2018 (definitely if solder is true, and maybe even with paste).
Ha! It's funny that people think high end and overclocking products make up a large portion of the products sold...

Solder vs TIM doesn't matter to 99% of the buyers.
Posted on Reply
#61
hat
Enthusiast
newtekie1Ha! It's funny that people think high end and overclocking products make up a large portion of the products sold...

Solder vs TIM doesn't matter to 99% of the buyers.
Well, it kinda does, just not in such an obvious way. 99% of the buyers probably aren't even aware this "solder vs TIM" debacle even exists... but they are affected by it when their overheating, Colgate covered processor complete with dinky coaster cooler thermal throttles to "base" clocks or worse. "Mister Upto, I see you're up to no good again!"

That said, I reiterate that though I've been slamming Intel pretty hard over the thermal paste, they should also include better coolers. I think, even if soldered, that coaster cooler is okay for the really low power chips like Celeron or Pentium, but once you get up to i3 territory, they should at least use the full height cooler.

It's a shame... Intel has some great silicon, but they ruin it with their poor design choices when it comes to cooling. It's like they made a Ferrari engine, and put it in this:

Posted on Reply
#62
StrayKAT
hatWell, it kinda does, just not in such an obvious way. 99% of the buyers probably aren't even aware this "solder vs TIM" debacle even exists... but they are affected by it when their overheating, Colgate covered processor complete with dinky coaster cooler thermal throttles to "base" clocks or worse. "Mister Upto, I see you're up to no good again!"

That said, I reiterate that though I've been slamming Intel pretty hard over the thermal paste, they should also include better coolers. I think, even if soldered, that coaster cooler is okay for the really low power chips like Celeron or Pentium, but once you get up to i3 territory, they should at least use the full height cooler.

It's a shame... Intel has some great silicon, but they ruin it with their poor design choices when it comes to cooling. It's like they made a Ferrari engine, and put it in this:

I was actually surprised during some recent shopping that Intel actually has a 2017 heatsink in their catalog, put on the suggestions pages for the Core-X. No way in hell was I going to buy that thing... even though I'm not doing any overclocking.
Posted on Reply
#63
phill
hatWell, it kinda does, just not in such an obvious way. 99% of the buyers probably aren't even aware this "solder vs TIM" debacle even exists... but they are affected by it when their overheating, Colgate covered processor complete with dinky coaster cooler thermal throttles to "base" clocks or worse. "Mister Upto, I see you're up to no good again!"

That said, I reiterate that though I've been slamming Intel pretty hard over the thermal paste, they should also include better coolers. I think, even if soldered, that coaster cooler is okay for the really low power chips like Celeron or Pentium, but once you get up to i3 territory, they should at least use the full height cooler.

It's a shame... Intel has some great silicon, but they ruin it with their poor design choices when it comes to cooling. It's like they made a Ferrari engine, and put it in this:

That would be one proper sleeper car as well!! :) Intel ain't going to be getting my money....
Posted on Reply
#64
hat
Enthusiast
StrayKATI was actually surprised during some recent shopping that Intel actually has a 2017 heatsink in their catalog, put on the suggestions pages for the Core-X. No way in hell was I going to buy that thing... even though I'm not doing any overclocking.
Interesting. No cooler included with those HEDT chips, but they're offering to sell one separately? I'm sure you could do better by far with a Hyper 212 or something. Seems to be the economical choice for improved cooling these days, much like the Arctic Freezer 64 Pro (AMD)/7(intel) was when I got into the game.
phillThat would be one proper sleeper car as well!! :) Intel ain't going to be getting my money....
I know... I was rolling around with the idea of upgrading to an i5 8400, or more likely, an 8600K. I don't like their business practices, but as I mentioned earlier, it's still damn good silicon... and I'd eat the K-series shit sandwich to stay off the locked platform (not just for more MHz, but faster RAM as well, which also never used to be an issue)... but AMD is looking damn good these days. The Ryzen refresh currently available seems to do better with memory, where the OG Ryzen lineup suffered from memory compatibility issues... and with "Zen 2" around the corner, which should close, eliminate, or maybe even surpass the current gap in raw performance... yeah, AMD's looking good. Even if Intel reverses everything I've been slamming them for, AND increases performance even more, AMD just might get my money anyway, just out of principle.
Posted on Reply
#65
StrayKAT
hatInteresting. No cooler included with those HEDT chips, but they're offering to sell one separately? I'm sure you could do better by far with a Hyper 212 or something. Seems to be the economical choice for improved cooling these days, much like the Arctic Freezer 64 Pro (AMD)/7(intel) was when I got into the game.



I know... I was rolling around with the idea of upgrading to an i5 8400, or more likely, an 8600K. I don't like their business practices, but as I mentioned earlier, it's still damn good silicon... and I'd eat the K-series shit sandwich to stay off the locked platform (not just for more MHz, but faster RAM as well, which also never used to be an issue)... but AMD is looking damn good these days. The Ryzen refresh currently available seems to do better with memory, where the OG Ryzen lineup suffered from memory compatibility issues... and with "Zen 2" around the corner, which should close, eliminate, or maybe even surpass the current gap in raw performance... yeah, AMD's looking good. Even if Intel reverses everything I've been slamming them for, AND increases performance even more, AMD just might get my money anyway, just out of principle.
Doh. Wait, I was wrong. It's a Thermal Solution Specification. I read it wrong.

Still though, they listed a dinky thermal product for Kaby Lake, if I recall. Which is still crazy.
Posted on Reply
#66
Valantar
newtekie1Yes, you are correct, I was wrong about those bits not using PCI-E lanes from the PCH. But that still leaves 16 or 17 dedicated PCI-E lanes coming off the PCH even with SATA, USB 3.0, and a Gb NIC. So combined with the lanes from the CPU you still get 32 PCI-E lanes, more than enough for two graphics cards, two M.2 drives, and extra crap.
It's too late for me to answer your post in full (I'll do that tomorrow), but for now I'll say this: look into how these lanes can (and can't) be split. There are not 16 or 17 free lanes, as you can't treat them as individually addressable when other lanes on the controller are occupied. At best, you have 3 x4 groups free. That's the best case scenario. Which, of course, is enough. But that's not usually the case. Often, some of those lanes are shared between motherboard slots, m.2 slots or SATA slots for example, and you'll have to choose which one you want. Want NVMe? That disables the two SATA ports shared with it. Want an AIC in the x1 slot on your board? Too bad, you just disabled an m.2 slot. And so on.
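As a toy illustration of that kind of sharing (the slot names and groupings below are made up for the example, not taken from any real board's manual):

```python
# Toy model of shared chipset lanes: populating one device in a group
# disables the others that contend for the same lanes. Names are hypothetical.
SHARED_GROUPS = {
    "rst_group_1": ["M2_1 (NVMe x4)", "SATA_5", "SATA_6"],
    "rst_group_2": ["M2_2 (NVMe x4)", "PCIE_X1_2"],
}

def remaining_options(populated: str) -> dict:
    """What stays usable in each shared group once `populated` is installed."""
    out = {}
    for group, members in SHARED_GROUPS.items():
        if populated in members:
            out[group] = [populated]    # the installed device claims the lanes
        else:
            out[group] = list(members)  # other groups keep all their options
    return out

# Drop an NVMe drive into M2_1 and the two SATA ports sharing its lanes go away.
print(remaining_options("M2_1 (NVMe x4)"))
```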
Posted on Reply
#67
Blueberries
hatYou aren't a regular user. You use an alternative cooling method that far surpasses the stock cooler, which ensures you get that advertised turbo boost. The majority of users don't even understand how this works, let alone know they can install a better cooler to avoid the throttling-level temperatures they aren't even aware of.
Those coolers are available for $20.

The majority of users don't even know what CPU they have.
Posted on Reply
#68
hat
Enthusiast
Sure, a $20 cooler could likely do the job, but as you say, the majority of users don't even know what CPU they have... let alone what temp it's running at, or the fact that the cooler they have is likely insufficient and their CPU is throttling, or at least not boosting to where it should be.
Posted on Reply
#69
RealNeil
Outback BronzeIn any case, I'm starting to enjoy decapitating Intel's latest CPU's : )
Ha! Me too!

But I was sweating bullets when I opened my 7900X. The delid turned out great.
Posted on Reply
#70
hat
Enthusiast
The only benefit to being able to do it is that you can run direct-die if you're brave enough... I could delid a CPU without worrying about it too much, but I'd have a hard time going direct-die.
Posted on Reply
#71
Assimilator
ValantarIt's too late for me to answer your post in full (I'll do that tomorrow), but for now I'll say this: look into how these lanes can (and can't) be split. There are not 16 or 17 free lanes, as you can't treat them as individually addressable when other lanes on the controller are occupied. At best, you have 3 x4 groups free. That's the best case scenario. Which, of course, is enough. But that's not usually the case. Often, some of those lanes are shared between motherboard slots, m.2 slots or SATA slots for example, and you'll have to choose which one you want. Want NVMe? That disables the two SATA ports shared with it. Want an AIC in the x1 slot on your board? Too bad, you just disabled an m.2 slot. And so on.
It's so refreshing to find someone else who understands how PCIe lanes and FlexIO actually work on Intel chipsets. I guess AnandTech can produce as many easy-to-comprehend articles as they want, but people will still choose not to read them, then argue from a point of ignorance.
Posted on Reply
#73
VulkanBros
HoodBecause the soldered i9 will be much faster than anything AMD has, for at least the next year. It should be the best upgrade since Sandy. Possible OC to 5.5 stable on water? I think the i9-9900K will be the best-selling CPU in 2018 (definitely if solder is true, and maybe even with paste).
May well be - we will have to see - it will be far more interesting to see the price/performance ratio.

It is a bit funny (IMO) - this whole debate, in particular the part about Intel being so-and-so much faster. Yes, maybe, in synthetic tests and benchmarks - but when it comes to real life,
who the f... can see whether the game is running at 2 fps more on an Intel vs. an AMD?? It's like people do nothing but run benchmarks all day long.....
Posted on Reply
#74
Valantar
newtekie1The CPU core design is still identical. The die has one CCX removed and a GPU added, but the CPU cores are identical to Ryzen cores.

The product stack is the current generation of processors on the same socket.

Intel's current product stack on the 1151(300) socket ranges from the Celeron G4900 all the way up to the 8700K. The mobile processors are a completely different series and product stack.
Well, it seems like you've chosen where to draw your arbitrary delineator (socket+a very specific understanding of architecture), and can't be convinced otherwise. Oh well. I don't see a problem with two different product stacks existing on the same socket/platform - you do. That's your right, no matter how weird I find it. But at least be consistent: You say KBL-X is a part of a singular HEDT product stack, yet it's based on a completely different die, with fundamental design differences (including core layouts and low-level caches, plus the switch from a ring bus to a mesh interconnect between cores for SKL-X). SKL-X is not the same architecture as KBL-X. KBL-X is the same architecture as regular Skylake (just with some optimizations), but SKL and SKL-X are quite different. It's starting to sound more and more like your definition is "same socket, same numbered generation", with the architecture and feature set being irrelevant. I strongly disagree with that.
newtekie1Yes, removing the limit would be easy, but it hasn't become necessary. Even on their HEDT platform, the link hasn't become an issue. The fact is with the exception of some very extreme fringe cases, the QPI link between the chipset and the CPU isn't a bottleneck. The increased cost would be negligible, but so would the increase in performance.

The drive is never connected to the CPU on the mainstream platform. Data will flow from the NIC directly to the drive through the PCH. The QPI link to the CPU never comes into play. And even if it did, a 10Gb NIC isn't coming close to maxing out the QPI link. It would use about 1/4th of the QPI link.
You're right, I got AMD and Intel mixed up for a bit there, as only AMD has CPU PCIe lanes for storage. Still, you have a rather weird stance here, essentially arguing that "It's not Intel's (or any other manufacturer's) job to push the envelope on performance." Should we really wait until this becomes a proper bottleneck, then complain even more loudly, before Intel responds? That's an approach that only makes sense if you're a corporation which has profit maximization as its only interest. Do you have the same approach to CPU or GPU performance too? "Nah, they don't need to improve anything before all my games hit 30fps max."

As for a 10GbE link not saturating the QPI link, you're right, but you need to take into account overhead and imperfect bandwidth conversions between standards (10GbE controllers have a max transfer speed of ~1000 MB/s, which only slightly exceeds the 985 MB/s theoretical max of a PCIe 3.0 x1 connection, but the controllers still require an x4 connection - there's a difference between internal bandwidth requirements and external transfer speeds) and bandwidth issues when transferring from multiple sources across the QPI link. You can't simply bunch all the data going from the PCH to the CPU or RAM together regardless of its source - the PCH is essentially a PCIe switch, not some magical data-packing controller (that would add massive latency and all sorts of decoding issues). If the QPI link is transferring data from one x4 device, it'll have to wait to transfer data from any other device. Of course, switching happens millions if not billions of times a second, so "waiting" is a weird term to use, but it's the same principle as non-MU-MIMO WiFi: you might have a 1.7 Gb/s max theoretical speed, but when the connection is constantly rotating between a handful of devices, performance drops significantly, both because each device only has access to a fraction of each second and because some performance is lost in the switching. PCIe is quite latency-sensitive, so PCIe switches prioritize fast switching over fancy data-packing methods.

And, again: unless your main workload is copying data back and forth between drives, it's going to have to cross the QPI link to reach RAM and the CPU. If you're doing video or photo editing, that's quite a heavy load. Same goes for any type of database work, compiling code, and so on. For video work, a lot of people use multiple SSDs for increased performance - not necessarily in RAID, but as scratch disks and storage disks, and so on. We might not be quite at the point where we're seeing the limitations of the QPI link, but we are very, very close.

My bet is that Intel is hoping to ride this out until PCIe 4.0 or 5.0 reach the consumer space (the latter seems to be the likely one, given that it's arriving quite soon after 4.0, which has barely reached server parts), so that they won't have to spend any extra money on doubling the lane count. And they still might make it, but they're riding a very fine line here.
newtekie1Yes, you are correct, I was wrong about those bits not using PCI-E lanes from the PCH. But that still leaves 16 or 17 dedicated PCI-E lanes coming off the PCH even with SATA, USB 3.0, and a Gb NIC. So combined with the lanes from the CPU you still get 32 PCI-E lanes, more than enough for two graphics cards, two M.2 drives, and extra crap.
As I said in my previous post: you can't just lump all the lanes together like that. The PCH has six x4 controllers, four of which (IIRC) support RST - and as such, either SATA or NVMe. The rest can support NVMe drives, but are never routed as such (an M.2 slot without RST support, yet still coming off the chipset, would be incredibly confusing). If your motherboard only has four SATA ports, that occupies one of these RST controllers, with three left. If there are more (another 1-4), two are gone - you can't use any remaining PCIe lanes for an NVMe drive when there are SATA ports running off the controller. A 10GbE NIC would occupy one full x4 controller - but with some layout optimization, you could hopefully keep that off the RST-enabled controllers. But most boards with 10GbE also have a normal (lower-power) NIC, which also needs PCIe - which eats into your allocation. Then there's USB 3.1 Gen2 - which needs a separate controller on Z370 and is integrated on every other 300-series chipset, but requires two PCIe lanes per port either way (most controllers are 2-port, PCIe x4); the difference is whether the controller is internal or external. Then there's WiFi, which needs a lane, too.

In short: laying out all the connections, making sure nothing overlaps too badly, that everything has bandwidth, and that the trade-offs are understandable (insert an m.2 in slot 2, you disable SATA4-7, insert an m.2 in slot 3, you disable PCIe_4, and so on) is no small matter. The current PCH has what I would call the bare minimum for a full-featured high-end mainstream motherboard today. They definitely don't have PCIe to spare. And all of these devices need to communicate with and transfer data to and from the CPU and RAM.
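To make the budgeting concrete, here's a rough tally for a hypothetical, fully loaded 300-series board; the figures are illustrative approximations, not a specific product's layout:

```python
# Illustrative PCH lane budget for a hypothetical high-end 300-series board.
# Six x4 HSIO controllers give 24 lanes; the consumers below are approximations.
TOTAL_PCH_LANES = 24

consumers = {
    "SATA ports (one x4 RST group)": 4,
    "Gigabit NIC":                   1,
    "10GbE NIC":                     4,
    "USB 3.1 Gen2 controller":       2,
    "Wi-Fi":                         1,
    "M.2 slot #1 (NVMe x4)":         4,
    "M.2 slot #2 (NVMe x4)":         4,
}

used = sum(consumers.values())
print(f"Used {used} of {TOTAL_PCH_LANES} lanes; {TOTAL_PCH_LANES - used} left for x1 slots and extras")
# -> 20 of 24 used, leaving only 4 lanes before slots start sharing or getting disabled
```

Even in this generous accounting, only a handful of lanes are left over, which is exactly why boards end up sharing lanes between slots and ports.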
Posted on Reply
#75
las
VulkanBrosWhy not just buy an AMD CPU.......
Because they suck for high fps gaming.
Posted on Reply