
Intel Core i7 "Broadwell-E" Lineup to Feature Four SKUs

Yes, but the four lanes available are generally tied to an M.2 slot and gimped to 2.0 x4, which is my point about how the manufacturer defines how these PCH lanes are allocated. For instance, a Gen 3 Ultra M.2 slot will share lanes from the CPU side rather than the PCH, because the lanes in the PCH are tied to other features on the board. One example is http://www.newegg.com/Product/Produ...57501&cm_re=asrock_z97-_-13-157-501-_-Product , where SLI (dual x8) isn't an option, there is no Ultra M.2, and the only M.2 slot on the board is Gen 2 coming off the PCH.

Your example is a Z97 board.
 

Check Intel's own document at http://ark.intel.com/products/90591/Intel-GL82Z170-PCH and see for yourself whether Z170 provides 20 PCIe 3.0 lanes.

The PCIe lanes from the 5820K are used for USB 3.0/3.1 and M.2 slots too, just like how Z170 is wired up to various onboard chips. The 5930K and 5960X do provide more usable lanes, but at a higher cost.

In terms of pure functionality and performance, the 9-series chipsets can't even hold a candle to the 100 series, so X99 by itself is no exception here.

BTW, your link points to a Z97 board, not a Z170.
 
This is actually the board I wanted to link.
http://www.newegg.com/Product/Produ...132574&cm_re=asus_z170-_-13-132-574-_-Product

I never disregarded whether or not it supplies 20 split Gen 3 lanes; I said what's usable to the user after the manufacturer has allocated those lanes from the PCH.

Can't hold a candle? X99 can supply more lanes without the additional help of a switching interconnect (the PCH). Sure, that comes with the associated cost, but it still, without a doubt, puts more bandwidth at the user's disposal. Don't gloss over the point to prop up your own argument. Skylake is just as limited as Haswell and previous mainstream platforms in terms of PCIe lanes and available bandwidth. Skylake (LGA 1151) takes 4 lanes from the CPU and splits them across the associated features on the board, whereas if I were to buy a 5930K right now (or even a 5820K) I'd still have more raw lanes and bandwidth at my disposal. 40 vs. 20 isn't an argument; it's fact. On X99 (or even X79, like my current board), if we start adding switching chips we get even more switched bandwidth, and we're not trying to split 4 lanes but 28-40.

This is where we go back to my original comment saying 16+4, which is exactly what it is. As with the corrected board I linked: it can't do SLI (the license requires x8 to two slots), all 16 lanes go to the first PCIe slot, and the M.2 is left with Gen 2 x4, which I'd be willing to bet kills the second full-size slot when the M.2 is in use.
 

You seem to have facts and various technical terms mixed up. X99 is a PCH, and so is Z170. You seem to think X99 and the PCH are two separate chips (you probably confused it with PLX chips). X99 by itself cannot even touch Z170 in terms of raw features: it only provides 8 PCIe 2.0 lanes and a DMI 2.0 interface, whereas the latter provides 20 PCIe 3.0 lanes and a DMI 3.0 interface. About the only place X99 has an edge is SATA ports: 10 vs. 6.

In one paragraph you acknowledged that the Z170 PCH is PCIe 3.0 capable, yet later you said the M.2 slot wired to it was only 2.0 x4. Have you realised you contradicted yourself? M.2 ports on all Z170 boards are rated for 3.0 x4, i.e. 32 Gb/s. Can 2.0 x4 provide 32 Gb/s of bandwidth?

You used that bottom-of-the-pack Z170 board as the sole example to prove that SLI is only possible with X99? Take a look at this: http://www.newegg.com/Product/Produ...=asus_z170_pro_gaming-_-13-132-567R-_-Product . BTW, this is only a mid-range board.
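The 32 Gb/s question answers itself once you plug in the raw signalling rates: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, PCIe 3.0 at 8 GT/s with 128b/130b. A quick sketch (nominal figures, ignoring protocol overhead):

```python
def x4_link_gbps(raw_gt_s: float, payload_bits: int, symbol_bits: int) -> float:
    """Usable bandwidth of a 4-lane link in Gb/s after line encoding."""
    return 4 * raw_gt_s * payload_bits / symbol_bits

gen2_x4 = x4_link_gbps(5.0, 8, 10)     # 8b/10b encoding
gen3_x4 = x4_link_gbps(8.0, 128, 130)  # 128b/130b encoding
print(f"PCIe 2.0 x4: {gen2_x4:.1f} Gb/s")  # 16.0 -- nowhere near 32
print(f"PCIe 3.0 x4: {gen3_x4:.1f} Gb/s")  # ~31.5 -- the marketed '32 Gb/s'
```

So a 2.0 x4 slot tops out at 16 Gb/s; only a 3.0 x4 link reaches the advertised ~32 Gb/s.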
 
Sigh, we were talking about CPU PCIe lanes, not motherboard lanes :banghead:
 
Hmm, 10 cores :respect:. 10 cores vs. a seven-year-old quad-core. Maybe it's time to let go of my old i7-920. It's a hard choice.
 
Yes, but the four lanes available are generally tied to an M.2 slot and gimped to 2.0 x4, which is my point about how the manufacturer defines how these PCH lanes are allocated. For instance, a Gen 3 Ultra M.2 slot will share lanes from the CPU side rather than the PCH, because the lanes in the PCH are tied to other features on the board. One example is http://www.newegg.com/Product/Produ...57501&cm_re=asrock_z97-_-13-157-501-_-Product , where SLI (dual x8) isn't an option, there is no Ultra M.2, and the only M.2 slot on the board is Gen 2 coming off the PCH.

Please... learn to read... or at least Google before making uneducated comments.

[Image: Z170 chipset block diagram]
 
Sigh, we were talking about CPU PCIe lanes, not motherboard lanes :banghead:

We were, until someone got confused and insisted on things without actually checking the facts, and it ended up being a discussion about chipset lanes :D
 
I bet this means the same thing: total cache = L3.
On the Intel Core architecture, the higher-level caches are inclusive of the lower-level ones (L3 always holds a copy of the contents of L2, and so on). So this probably means "usable cache", which is the same as L3.

Well, they seem to call it "Smart Cache" these days, so it's L2 and L3 all bundled together...
 
You seem to have facts and various technical terms mixed up. X99 is a PCH, and so is Z170. You seem to think X99 and the PCH are two separate chips (you probably confused it with PLX chips). X99 by itself cannot even touch Z170 in terms of raw features: it only provides 8 PCIe 2.0 lanes and a DMI 2.0 interface, whereas the latter provides 20 PCIe 3.0 lanes and a DMI 3.0 interface. About the only place X99 has an edge is SATA ports: 10 vs. 6.
You seem to have missed that I'm referring to them both as a platform and as the PCH. It's called context. X99 as a platform still stomps Z170 in raw bandwidth: bandwidth you can use to add whatever devices you see fit, managing what you do or don't have in your system at your disposal. You also completely missed the fact that those lanes from the PCH are still being switched through to the CPU. This is why you don't have control over those particular lanes, only over what the manufacturer deems safe (i.e. that M.2). Hence 40, or even 28, is better than 20, which is realistically 16.

In one paragraph you acknowledged that the Z170 PCH is PCIe 3.0 capable, yet later you said the M.2 slot wired to it was only 2.0 x4. Have you realised you contradicted yourself? M.2 ports on all Z170 boards are rated for 3.0 x4, i.e. 32 Gb/s. Can 2.0 x4 provide 32 Gb/s of bandwidth?
I have not. I acknowledged that it's capable of Gen 3, but that has nothing to do with what the manufacturer has allocated to the M.2 slot. Those 4 lanes on your beloved mainstream platform are being split and switched to features as the vendor deems necessary, and my point about the M.2 is that by default it's still given Gen 2 x4. The ports may be rated for 32 Gb/s, but the only manufacturer that advertises it is ASRock with their Ultra M.2, which has been the case since Z97. In the case of the Asus Z170-A (yes, not the board you linked), they even specify that the M.2 bandwidth is shared with PCIe x16_3, and that won't be coming from the PCH. SATA Express, M.2, and PCIe x16_3 all sharing the same bandwidth... I guess that's a feature. Yet with 40 lanes I can cram in three GPUs running at Gen 3 x8 and a couple of Gen 3 x4 M.2 drives without worrying about splitting, while still having 8 lanes to spare. All this while keeping the basic features of the board like USB 3, extra SATA, etc.
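The "three GPUs plus a couple of M.2 drives" arithmetic can be sanity-checked with a toy allocator. This is just lane counting under an all-or-nothing assumption, not how real PCIe bifurcation logic works:

```python
def allocate_lanes(total: int, requests: list[int]) -> tuple[list[int], int]:
    """Greedy sketch: grant each requested link width while CPU lanes remain."""
    granted, left = [], total
    for want in requests:
        take = want if want <= left else 0  # a link is all-or-nothing here
        granted.append(take)
        left -= take
    return granted, left

# Three GPUs at x8 plus two Gen 3 x4 M.2 drives:
wishlist = [8, 8, 8, 4, 4]
print(allocate_lanes(40, wishlist))  # ([8, 8, 8, 4, 4], 8) -- fits, 8 lanes spare
print(allocate_lanes(16, wishlist))  # ([8, 8, 0, 0, 0], 0) -- mainstream runs dry
```

With 40 CPU lanes the whole wishlist fits with 8 lanes left over; with 16 mainstream lanes everything past the second GPU has to fall back to the shared PCH uplink.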

You used that bottom-of-the-pack Z170 board as the sole example to prove that SLI is only possible with X99? Take a look at this: http://www.newegg.com/Product/Produ...=asus_z170_pro_gaming-_-13-132-567R-_-Product . BTW, this is only a mid-range board.
That was just the first board that popped up for "Asus Z170" (since my search is sorted low to high), and again the point about lane usage went completely over your head. I'll spell it out for you: x16 goes to the top slot, and technically the PCH splits 4 lanes into 20 Gen 3 lanes, yet you're still limited by shared bandwidth on the third slot. When we look at the board you linked, the same thing is going on. The board supports SLI, but that's only splitting the main lanes for the two main PCIe slots into 8/8 or a single x16, with the third slot sharing bandwidth with the M.2: only one or the other will work. It still doesn't change the fact that it's mainstream and still extremely limited.
I've digressed long enough. I don't intend to continue an argument that bears no fruit for the topic at hand anyway. One can never forget the fame TPU holds for never-ending arguments in the News section, and I don't plan to be part of this one any longer. Have a good one, mate.
 
Better binning might explain the difference between an 8-core at 105W and another at 140W, but a 55W 8-core vs. a 140W 6-core seems to indicate something far beyond binning. I agree with Nelson Ng: these seem like rebadges. Broadwell is nowhere near that inefficient.

Now Haswell, on the other hand, might make more sense. Like my 4790K: 4 cores at 88W vs. 8 at 140W but with lower clocks, that adds up, with the 6-core simply being the low bin of the bunch and the 10-core the high bin.

The error in your analysis is that you assumed and equated TDP with actual power usage. TDP does not automatically mean power consumption. A graphics card or CPU can use less power than its TDP, about the same, or more, because TDP is guidance for the cooling-system design; it is not a term that should be used interchangeably with power usage. Yet, to this day, many PC 'enthusiasts' use the term incorrectly to imply power usage:

Core i7 3930K = 130W TDP
Core i7 4820K = 130W TDP
Core i7 4930K = 130W TDP
Core i7 4960X = 130W TDP

Real world system power usage once all of the threads are loaded:
http://www.xbitlabs.com/images/cpu/core-i7-4960x-4930k-4820k/Charts/power-2.png

"The new Ivy Bridge-E CPUs aren’t very economical at full loads, yet they are better than their Sandy Bridge-E counterparts. The Core i7-4960X needs 85 watts less than the Core i7-3970 whereas the Core i7-4930K needs 68 watts less than the Core i7-3930K. Thus, the new 22nm six-core CPUs from Intel offer higher performance per watt than their predecessors, largely due to the fact that the Ivy Bridge-E CPUs work at lower voltage."
http://www.xbitlabs.com/articles/cpu/display/core-i7-4960x-4930k-4820k_8.html#sect0

It was also pointed out to you that power usage grows superlinearly with voltage (roughly with its square, for dynamic power), and higher ASIC clock speeds tend to require higher voltages:
http://photocdn.sohu.com/20150714/Img416724389.png

Therefore, based on a correct understanding of TDP, and of the higher voltages needed to reach higher all-core frequencies and their impact on power usage, your comparison to a low-clocked 55W Xeon is not a valid way to discredit BW-E's efficiency.
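The voltage point can be made concrete with the usual first-order CMOS dynamic-power model, P ≈ C·V²·f. The capacitance constant and the voltage/frequency operating points below are made up purely for illustration:

```python
def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """First-order CMOS dynamic power: P ~ C_eff * V^2 * f."""
    return c_eff * volts ** 2 * freq_ghz

# Same hypothetical silicon, two operating points:
low = dynamic_power(10.0, 0.9, 2.0)   # low-clocked, low-voltage part
high = dynamic_power(10.0, 1.2, 3.5)  # higher clock needs higher voltage
print(f"{high / low:.2f}x the power for {3.5 / 2.0:.2f}x the clock")
```

Because power scales with V² times f, a modest clock bump that needs extra voltage costs far more than proportional power, which is how a low-clocked 55W part and a high-clocked 140W part can come off the same die.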
 
I've digressed long enough. I don't intend to continue an argument that bears no fruit for the topic at hand anyway. One can never forget the fame TPU holds for never-ending arguments in the News section, and I don't plan to be part of this one any longer. Have a good one, mate.

Well, as you seem to have realised what was wrong in your original argument and have changed the wording in your last post, let's end this conversation and go back to the original topic. As another member said earlier, get your facts straight before you start arguing with misleading information.
 

Heat is where consumed energy ends up: when energy is used, heat follows. So whether TDP states heat or power matters little when talking about efficiency. Also, without an outside heat source it would be impossible for the thermal output to exceed the energy draw, entropy being what it is, though it certainly is possible, and typical, for the energy draw to exceed the thermal rating, especially once you factor in PSU efficiency.

Your first graph is the system power consumption, not that of the CPU alone. And it does show Ivy Bridge-E being more efficient than Sandy Bridge-E... in power draw. But Intel isn't going to publish a CPU TDP without reason, so we can assume that if they say the thermal load is the same, it is. So even while operating at a lower typical power draw, Ivy Bridge-E puts out heat at a higher rate, making it just as (in)efficient heat-wise as Sandy Bridge-E despite the power savings. But I guess that's where the comparison fails: a datacenter is concerned with both power draw and heat, whereas a home user really only cares about power draw and noise, not necessarily heat.

The LGA 2011-v3 socket is the same for Xeons and i7s alike. They are based on the same architecture (Haswell-E) and run at similar voltages. The base clocks tend to be reduced on the Xeons, but the turbo clocks are similar. The 55W 8-core is the L (low-power) version, and voltage certainly scales with both power and heat, but remember that's per core, so it scales with the number of cores as well. And the 145W 18-core is not a low-voltage part.

No matter how you slice it, these are not efficient at all, and certainly not in line with what we've been seeing in the graphics realm.

Though possibly the best answer comes in the paragraph you left out of the X-bit Labs comparison:

"By the way, the quad-core LGA2011 processor Core i7-4820K is close to the Haswell-based Core i7-4770K in terms of power consumption. This is rather an illustration of the increased power requirements of the LGA1150 platform than of any improvements in the Ivy Bridge-E design. After all, the new Ivy Bridge-E CPUs, even though they reduce the power consumption of the LGA2011 platform, do not make it economical. You should avoid this platform if you want an energy-efficient computer."
 
Well, as you seem to have realised what was wrong in your original argument and have changed the wording in your last post, let's end this conversation and go back to the original topic. As another member said earlier, get your facts straight before you start arguing with misleading information.
My facts are straight; that's exactly why I didn't want to argue. 40 lanes are better than 20 (really 16) no matter what angle you come at it from. The chip (Z170) itself may be better, but from a performance and usability standpoint, the lanes will come in handy for the consumer on the X99 platform. DMI 3.0 has roughly the bandwidth of 4 lanes going to the CPU, so it doesn't do much good besides letting you load up on M.2, and if you want to do that, then more than likely you have the cash for HEDT anyway. That was the whole point of the matter: you can do more with more lanes from the CPU than with lanes dictated by your PCH. Feel me now?
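The "roughly 4 lanes" figure checks out: DMI 3.0 is electrically similar to a PCIe 3.0 x4 link (4 lanes at 8 GT/s with 128b/130b encoding), so everything behind the PCH shares one x4-equivalent uplink. A quick sketch:

```python
def pcie3_gbytes(lanes: int) -> float:
    """One-way PCIe 3.0 bandwidth in GB/s after 128b/130b encoding."""
    return lanes * 8.0 * (128 / 130) / 8

dmi3_uplink = pcie3_gbytes(4)  # the PCH's link back to the CPU
one_nvme = pcie3_gbytes(4)     # a single 3.0 x4 NVMe drive
print(f"DMI 3.0 uplink: ~{dmi3_uplink:.2f} GB/s")
# One fast NVMe drive can saturate the whole PCH uplink by itself:
print(one_nvme >= dmi3_uplink)  # True
```

So however the PCH fans its 20 lanes out, aggregate traffic back to the CPU is capped at roughly one Gen 3 x4 link's worth, which is the crux of the 40-vs-20 argument above.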
 

The other members and I were only trying to point out the misinformation in your original post (that M.2 slots on Z170 are only 2.0 x4).

Talking about the overall platform: yes, Haswell-E + X99 will give you more lanes if you opt for the 5930K/5960X (it's a lot more expensive, so you'd expect that anyway). However, how these lanes are utilised is actually controlled (or dictated, in your words) by motherboard manufacturers, and this applies to both X99 and Z170. You probably think those 20 lanes on Z170 are fixed to certain functions and cannot easily be changed. Well, that is not true. Intel introduced a feature called Flex I/O starting with the 9-series chipsets. It allows the PCIe lanes in the PCH to be grouped and assigned to various functions (SATA, M.2, SATA Express, U.2, USB, LAN, etc.) based on the motherboard manufacturer's own design (the best example is the Asus Maximus VIII Hero vs. the Maximus VIII Hero Alpha announced in the news today). So motherboard manufacturers could give you more than 3.0 x4 from the PCH if they wished.

Those 40 lanes from Haswell-E are usually wired to 3 or 4 PCIe x16 slots plus an M.2 slot. You may think you can utilise all of them, but that's only true if you have multiple GPUs. For a graphics card plus a few PCIe SSDs, some of those lanes are wasted (e.g. putting a x4 SSD in a x8 slot). On top of that, you can only put PCIe SSDs in the M.2 slot fed from the CPU, whereas the Z170 slot supports both PCIe and SATA (not that many people actually want SATA M.2 SSDs). Intel RST is another feature you miss out on if you use CPU PCIe lanes for storage devices. So for overall flexibility the two platforms are about the same, with X99 having the edge in multi-GPU and Z170 winning on storage options (and with the 5820K's 28 lanes, the scales tip towards Z170 slightly).
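Flex I/O can be pictured as mapping a fixed pool of high-speed I/O (HSIO) lanes to functions at board-design time. This is a toy model: the pool size matches the commonly cited 26 HSIO lanes for Z170, but the groupings and limits are illustrative, not Intel's actual multiplexing rules:

```python
HSIO_POOL = 26  # commonly cited HSIO lane count for Z170-class PCHs

def fits_pool(assignments: dict[str, int], pool: int = HSIO_POOL) -> bool:
    """A board layout is buildable (in this toy model) if it fits the pool."""
    return sum(assignments.values()) <= pool

# One hypothetical vendor layout -- a full-speed 3.0 x4 M.2 is a design
# choice, not something the chipset forbids:
layout = {
    "pcie_slots": 12,
    "m2_gen3_x4": 4,
    "sata": 6,
    "usb3/lan": 4,
}
print(fits_pool(layout))                         # True: 26 lanes cover it
print(fits_pool({**layout, "second_m2_x4": 4}))  # False: something must give
```

The point the model captures is the trade-off: every function a vendor adds consumes lanes from the same pool, which is why different boards on the same chipset expose such different M.2/SATA/slot mixes.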
 
I've generally been a bit of a Linux power user, but I don't even know what I would do with 16 or 20 threads of execution available. Sometimes I have 3 or 4 VirtualBox VMs open, but they are interactive sessions, so I can only use one at a time anyway. My existing projects' compile times aren't long enough to worry about. That's a lot of firepower...
 
I know I could easily use it for the rendering I do, but I'd need 32-64 GB of RAM to really make that many threads useful. I'm already very limited on my 4c/4t i5 with 8 GB of RAM.
 
I considered going with some second-hand X99... but such low clocks at 140W TDP... Broadwell seems to be a disaster for overclocking... Something is really wrong.

Oh well... my old X79 is still plenty... it just might kick the bucket some day...

@xorbe

I sometimes put together and patch CyanogenMod ROMs for my personal phones if I have some free time... You know... compiling takes everything you throw at it, and it still takes around 30 minutes to 1.5 hours, depending on the platform and toolchains...
 
Sounds a little dubious to me that this supposed 6950X can only afford 2 MB of L3 per core. It's uncharacteristic of the HEDT lineage, especially in such an expensive flagship. The 3930K and 4930K were only 2 MB/core, but they also weren't $1000 flagships.
My 3820 was 300 dollars and has 10 MB of L3: 2.5 MB per core, the same as the EE chips. The L3 is also shared between all of the cores, so depending on the workload, the reduced amount "per core" probably won't be a problem. L3 is accessible to every core; it's not like L2, where each core has its own. So if one core is working with data that another core will eventually work with, there's a good bet the second core won't need to go further than L3 to fetch it. 20 MB of L3 is a lot of cache for a single CPU. There is a point of diminishing returns with hit rates, though: eventually you can't just throw more cache at a problem and expect a speed-up.
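The per-core arithmetic, plus the diminishing-returns point, can be sketched as follows. The miss-rate curve is a textbook power-law rule of thumb with made-up constants, not measured data:

```python
def l3_per_core(total_mb: float, cores: int) -> float:
    """Naive per-core share of a shared L3 (any core may use the whole pool)."""
    return total_mb / cores

print(l3_per_core(20, 10))  # 2.0 MB/core on a 20 MB, 10-core part
print(l3_per_core(10, 4))   # 2.5 MB/core on the 3820

# Diminishing returns: a power-law miss-rate model (illustrative constants).
def miss_rate(cache_mb: float, base: float = 0.10, alpha: float = 0.5) -> float:
    return base * cache_mb ** -alpha

for mb in (5, 10, 20, 40):
    print(mb, round(miss_rate(mb), 3))  # each doubling buys less than the last
```

The loop shows why "MB per core" understates a shared L3 and why doubling an already-large cache yields ever-smaller hit-rate gains.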
 
TBH, I'm happy with my 5960X.

As for board manufacturers making more, I really can't see it happening.

Why?

Well, my new motherboard (Gigabyte GA-X99-SOC-CHAMPION) has stopped being sold (yeah, I know it's only one out of loads), but even its successor is no longer available.

I've no idea how popular or unpopular the X99 platform has been, but it seems it was for fairly rich people.

Why Intel thought bringing out another lot of 2011-3 processors was a good idea, I'll never know :confused:
 
I do have a genuine need for a true 8-core chip (not 4 cores + HT) for Blender rendering, but I dislike the fact that these chips are 140W. For about 10 years now, I've been a devoted mini-ITX builder using CPUs whose TDP went no higher than 65W.

I might try these if they somehow come down to 95W-ish (no Xeon for me, I need the onboard GPU).
 

It's not like there isn't already a solution for this. Pack an i7-5960X (or whatever the Broadwell-E equivalent will be), an X99E-ITX and a U12DXi4 into an ITX case. If you need something truly SFF, then just use a 120 mm AIO.

It's just going to be expensive, that's all. Also, if you need HD Graphics, you're looking in the wrong place: no HEDT CPU has ever had integrated graphics. TDP is also not coming down; HEDT has always been 120-140W.
 

Not worth the money, I'm afraid. I do rendering on my quad-core laptop (i7-4700MQ) if I'm not in a rush and just leave it on. A high-quality render takes somewhere between 2 and 16 hours, and doing that on my main rig isn't very convenient.

There is also the option of CUDA rendering with my 760, but CUDA rendering has its own issues. I do find that the 760 is faster than my i7-4770S, but the GPU can't render all shaders in Blender, and the whole rig becomes very unresponsive when using the GPU.

The reason I need the iGPU is that I install rendering rigs in 2U rackmounts. Server boards come with some sort of third-party GPU that's just powerful enough to drive a monitor, but I want something more than that, and HD Graphics is perfect for it.
 
I do have a genuine need for a true 8-core chip (not 4 cores + HT) for Blender rendering, but I dislike the fact that these chips are 140W. For about 10 years now, I've been a devoted mini-ITX builder using CPUs whose TDP went no higher than 65W.

I might try these if they somehow come down to 95W-ish (no Xeon for me, I need the onboard GPU).

65W / 20 threads -> 3.25W per thread ... you're asking for mobile power stats!
 