Tuesday, August 5th 2015

Intel Debuts its 6th Generation Core Processor Family and Z170 Express Chipset

Intel announced its first 6th generation Core processors, codenamed "Skylake." Built on Intel's swanky new 14 nanometer silicon fab process, and in the new LGA1151 package, these processors bring DDR4 memory to the mainstream, and offer IPC improvements over the previous-generation Core "Haswell" and "Broadwell" processors. Making its debut at Gamescom, Intel is starting its lineup off with two chips that are predominantly targeted at the DIY gaming PC crowd, the Core i7-6700K and the Core i5-6600K quad-core processors. More models in the series will be launched towards the end of this month. The company also announced the Z170 Express chipset.

The Core i7-6700K features a nominal clock speed of 4.00 GHz, with a Turbo Boost frequency of 4.20 GHz. It features 8 MB of L3 cache, and HyperThreading. Its integrated Intel HD 530 graphics ticks at 350 MHz, with 1200 MHz Boost. The Core i5-6600K, on the other hand, features clock speeds of 3.50 GHz, with 3.90 GHz Turbo Boost. It features 6 MB of L3 cache, and lacks HyperThreading. It features the same integrated graphics solution as its bigger sibling. The TDP of both chips is rated at 91W. Both chips feature integrated memory controllers with support for DDR3L-1600 and DDR4-2133. The Core i7-6700K is priced around $350, and the i5-6600K around $243, in 1000-unit tray quantities. The retail packages of both chips will lack a stock cooling solution. The LGA1151 cooler mount is identical to that of the outgoing LGA1150, so you shouldn't have any problems using your older cooler.
The first wave of motherboards driving the two processors will be based on the Intel Z170 Express chipset, targeted at overclockers and gamers. Both chips feature unlocked clock multipliers, and this chipset lets you take advantage of that. With the 100-series chipset, Intel increased the DMI chipset bus bandwidth to 64 Gbps (32 Gbps per direction), which should help with the new generation of high-bandwidth storage devices, such as M.2/PCIe SSDs. The chipset features native support for the NVMe protocol.
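The 32 Gbps per-direction figure can be sanity-checked from the link parameters. A quick sketch, assuming DMI 3.0 uses PCIe 3.0-style signaling (8 GT/s across 4 lanes with 128b/130b encoding):

```python
# DMI 3.0 effective bandwidth per direction, assuming PCIe 3.0-style signaling
lanes = 4              # DMI is a x4 link
gt_per_s = 8.0         # 8 GT/s per lane (PCIe 3.0 rate)
encoding = 128 / 130   # 128b/130b line-encoding efficiency

gbps_per_direction = lanes * gt_per_s * encoding   # ~31.5 Gbps
gbytes_per_s = gbps_per_direction / 8              # ~3.94 GB/s

print(f"{gbps_per_direction:.1f} Gbps per direction")
print(f"{gbytes_per_s:.2f} GB/s per direction")
```

The line-encoding overhead is why the effective figure lands just under the raw 32 Gbps.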

Interestingly, Intel is keeping info on what it changed with the "Skylake" microarchitecture over "Broadwell" under wraps until the 18th of August, 2015. It will put out those technical details at IDF 2015. The company wants to take advantage of Gamescom to reach out to the DIY crowd with two products targeted specifically at them.

25 Comments on Intel Debuts its 6th Generation Core Processor Family and Z170 Express Chipset

#1
RejZoR
Darn, and I was hoping to see some more info now. I guess I'll have to postpone the decision to replace my whole system till late August/early September...
#2
Caring1
Why is Turbo Boost so restricted on the i7?
If anything that should clock higher for enthusiasts.
#3
birdie
No press release, no official information, nothing.

Where did you get this information from? I mean I know it had to be debuted today, but I want some hard official data.
#4
horik
I can wait a few weeks to see the price/performance of these CPUs; maybe it's time for an upgrade.
#5
HumanSmoke
birdie said:
No press release, no official information, nothing.
Where did you get this information from? I mean I know it had to be debuted today, but I want some hard official data.
The press deck is doing the rounds now. You probably won't see the SKUs, chipsets, and product briefs listed on Intel's ARK until tomorrow.
#6
birdie
HumanSmoke said:
The press deck is doing the rounds now. You probably won't see the SKUs, chipsets, and product briefs listed on Intel's ARK until tomorrow.
Ah, I get it: embargo until August 5, 5:00am PDT which is 2 hours 10 minutes from now. So, someone is actually committing a crime here ;-)
#7
Chaitanya
Eagerly waiting for reviews of the new CPUs. I might finally replace my old AMD-based systems with a new Skylake-based PC.
#8
Uplink10
No native USB 3.1 support? Interesting.

USB 3.0 adoption is moving all too slowly. You can still buy PCs that have too many USB 2.0 ports, and some PCs (notebooks) actually have two USB 2.0 ports connected to a USB 3.0 hub. While I do appreciate that both ports will run at full USB 2.0 speed instead of just half the speed of USB 3.0 when both are active, I would rather have two USB 3.0 ports.
#9
The Quim Reaper
Go home Intel, you're drunk on the beverage known as 'no competition'.
#11
FordGT90Concept
"I go fast!1!11!1!"
birdie said:
No press release, no official information, nothing.

Where did you get this information from? I mean I know it had to be debuted today, but I want some hard official data.
NDA lifted.


I really wish they'd announce availability of 6700K and 6700. If it's going to be another month...
#12
crsh1976
FordGT90Concept said:
NDA lifted.


I really wish they'd announce availability of 6700K and 6700. If it's going to be another month...
It's a soft launch; this is only for the K chips. The full line-up for desktop, but also laptops (possibly even overclockable mobile chips), is on August 18th.
#13
AlexTRoopeR
HD 530 has a maximum frequency of 1150 MHz!!!!!
#14
Aquinus
Resident Wat-man
Am I the only person who noticed DMI 3.0 (not DMI 2.0 like older motherboards,) and how the PCH has 20 PCI-e 3.0 lanes in addition to the 16 from the CPU?
#15
HumanSmoke
Aquinus said:
Am I the only person who noticed DMI 3.0 (not DMI 2.0 like older motherboards,) and how the PCH has 20 PCI-e 3.0 lanes in addition to the 16 from the CPU?
Nice to see someone else saw the bandwidth and connectivity options for the new platform.
Anandtech has a pretty good overview of the storage options afforded by DMI 3.0 (M.2 and SATA Express drives in RAID configs, for example). Some of the workstation-oriented boards should be quite interesting.
#16
Static~Charge
Caring1 said:
Why is Turbo Boost so restricted on the i7?
If anything that should clock higher for enthusiasts.
Thermal issues, likely. The HyperThreading circuitry produces heat, and it gets overclocked along with the rest of the CPU.
#18
cadaveca
My name is Dave
FordGT90Concept said:
Newegg finally lists the 6700K but it is "coming soon." Are there supply problems?

Also, will the non-K models have a HSF?
No supply problems. Intel timed the publicity launch to coincide with Gamescom and Win10.

As far as I heard (2nd hand), the non-K chips will have a HSF in the box. Only "enthusiasts" generally buy the "K" SKUs, and like 95% of them buy aftermarket cooling anyway. Save the trees, yo.
#19
FordGT90Concept
"I go fast!1!11!1!"
I guess that impacts whether I go 6700 or 6700K then. I think I would want to use a massive HSF because massive HSFs are awesome. At the same time, Intel probably isn't cutting the price by removing the HSF so...they're lining their pockets. The lower stock clock of the 6700 isn't appealing either...
#20
nem
AMD 8 cores or Intel 4 for the next generation?

This will be the most important question of the coming months. We face a dilemma that I will try to explain in plain, everyday language, with as little jargon as possible.

Note: this article was written with the Intel i5-3570 and i7-3770 (two 4-core chips) and the AMD FX-8350 (8 real cores) in mind, so when I generalize by brand I am thinking of one of these standard processors in the current high-end PC.

There is a legend that games do not use AMD's 8 cores. This is not entirely true: many games do not use all 8 AMD cores, but some do, and the results are very good for the 8-core AMD chips. If one day the gaming industry decides that games should take advantage of 8 cores, AMD would then pull ahead of the 4-core i5 and i7.

So why does an Intel 4-core cost as much as, or more than, an AMD 8-core?

There are also many other programs that do use the 8 cores (design software, 3D rendering, video editing and capture, etc.).

The price difference is not about raw power or multicore performance; the real difference lies in energy efficiency. AMD is subtly less efficient and runs subtly hotter than Intel, which can mean larger heatsinks and faster-spinning, less silent fans.

The value of the cores

Intel provides more power per core, and does so at a lower energy cost, and the current standard in games is 4 cores. So today, in most games, an Intel chip with 4 real cores is subtly superior in the games that only use 4 cores or fewer, meaning the games of this generation (Rift, Battlefield 3, Guild Wars 2, etc.).

There is a rule that holds today and must not be forgotten: a real Intel core performs a little better than a real AMD core, but a real AMD core performs better than a virtual core (thread) of an Intel i7. Therefore, an Intel i7 working with 8 virtual cores will perform worse than the 8 cores of an AMD FX-8350 in a program or game that takes advantage of 8 cores.

The most affordable i7s (€200-300) have 4 real cores, but with Hyper-Threading technology they can emulate 8 cores. They are still 8 virtual cores, but this technology makes them good at programs and games that use more than 4 cores. Hyper-Threading is one of the main differences between a quad-core i5 and a quad-core i7; without it they would perform almost the same, and in fact they do perform alike in most games that use 4 cores or fewer (over 99% of current games).

So, Intel i7 or AMD 8350?

It is a difficult choice, with processors like the 8350 costing nearly €100 less while offering power similar to an i7 in the rendering field. Comparing them to cars, the Intel i7 would be a quiet, elegant 300-horsepower Mercedes family sedan, and the AMD 8350 a special-edition 300 hp Renault Clio: on mountain and secondary roads it can beat the Mercedes in agility and nerve, while the Mercedes runs smooth and cool on the freeway without vibrating or making noise.

What is the best processor for gaming without overclocking (without raising the clocks)?

Whoever has the 4-core processor with the highest per-core performance has the best processor for gaming. Today the best processor for gaming, with nearly 40% more performance than the best AMD (in 99.9% of games), is the Intel i5-4670. Battlefield 4 is the 0.1%: AMD invested heavily in it, and it is programmed to distribute its load across more than 4 cores (note I said distribute it, not require 6 or 8 cores); it is considered an AMD-sponsored game.

Hopefully AMD will recover and catch up on its core technology instead of continuing to slide, because otherwise there will be no competition for Intel, and an absence of competition is bad news for consumers. For now AMD has set aside, for years, the development of processors with more than 4 cores in order to improve the per-core performance of its quad-cores, reaffirming the 4-core standard in the video game industry, presumably for many years.

More information about processor cores: How to know how many real cores a processor has

What will happen in the next generation with cores and games?

During the early years of the next generation of games (from around the launch of the PS4 and Xbox One), games will continue to use 4 cores, and only occasionally will we see a game that takes advantage of AMD's 8 real cores or an i7's 8 virtual cores. Beyond that we cannot know what will happen, but we can analyze the possibilities.

Everything indicates that, as happened with quad cores, 8 cores will not become standard until the second half or the end of the generation following their arrival in the consumer market. Based on what happened with quad cores, we can deduce that we should not worry about core counts until well past the second half of the next generation.

There will be the occasional game that runs better on 8 cores but is perfectly playable with 4.

The market rules

Many people are entering the PC gaming world by buying an i5 (4 cores), an i7 (4 cores), or a 4-core AMD APU, and no video game publisher can afford to exclude this sector, especially since these buyers expect their machines to last many years. For this very reason we can easily deduce that whoever starts today with a well-balanced 4-core processor can be sure of having a PC for many years, and will be able to enjoy the next generation of games perfectly.

What about consoles and 8-core processors?

Many people think the console market will lead the way for the PC. The new console hardware looks very much like a PC; in fact it is a PC, with an 8-core AMD processor much like the current 8350, but this will not force the PC to upgrade to 8 cores.

Do not forget that these consoles can do many things while you play; in fact this is one of the main features of this generation of machines from Sony and Microsoft: the console keeps processing in the background while you play. All these processes running simultaneously can only mean one thing: 4 cores for playing and 4 for multitasking (playing on the left half of the screen while a relative watches the news on the right half, browses the Internet, checks Facebook, makes a video call, etc.).

As on PC, we will find some exclusive titles that maximize the capabilities of the machine (as we saw on PS3 with Gran Turismo).
#21
HumanSmoke
nem said:
So why does an Intel 4-core cost as much as, or more than, an AMD 8-core? etc etc I guess
CPUs don't operate in a vacuum. The FX (700/800/900 series) platforms haven't had any kind of update for years. Just look down the feature-set lists of the platforms (and the platform issues/errata), and the form factor options. It's a pretty hard sell to convince people to switch to a system almost half a decade old that has no upgrade path, with a successor architecture being talked up by the company, a company that has itself already consigned CMT to history.
#22
FordGT90Concept
"I go fast!1!11!1!"
nem said:
...the AMD 8350 processor has 8 real cores...
No, it doesn't because 4 of those are missing the arithmetic logic unit. They share resources not unlike Hyperthreading but AMD's implementation has more components that aren't shared. At the end of the day, 8350 will only process arithmetic like a quad-core which means it is not an octo-core. An actual octo-core doesn't choke like that.
#23
Aquinus
Resident Wat-man
FordGT90Concept said:
No, it doesn't because 4 of those are missing the arithmetic logic unit. They share resources not unlike Hyperthreading but AMD's implementation has more components that aren't shared. At the end of the day, 8350 will only process arithmetic like a quad-core which means it is not an octo-core. An actual octo-core doesn't choke like that.
Jesus, not this discussion again. I think some clarification needs to be made on why AMD's core is a real core (in more respects than not.) First of all, Ford, you used the wrong term. 8-core AMD CPUs do have an ALU in every core. Each module does, however, share a floating-point unit. AMD CPUs can do 8 threads' worth of integer operations all day long with minimal loss in performance per thread. This changes when you start doing floating point math, because each module shares an FMA FPU that's 256 bits wide.

In the early days of Bulldozer, there were a lot of issues stemming from the unrefined design: cache latencies and hit/miss ratios were bad, decoding performance was poor (too few decoders for two cores,) and of course, there was the shared floating point unit. Since then AMD has improved the decoder and the cache latency and hit/miss ratio issues, and the FPU issue really isn't one, because anyone really using more than 4 threads of FP math would probably see very tangible benefits from utilizing GPGPU, which is what AMD has been trying to push for.

With that said, most operations that occur on a CPU are integer math. As a result, I would say that AMD has 8 real cores, because they have dedicated hardware for running 8 threads (even if some of it is shared.) This only doesn't hold true when you're talking about the FPU or the L2 cache (and Intel Core 2 Duos shared L2 cache between two cores, so I don't count that against it, really.)
nem said:
There is a rule that holds today and must not be forgotten: a real Intel core performs a little better than a real AMD core, but a real AMD core performs better than a virtual core (thread) of an Intel i7. Therefore, an Intel i7 working with 8 virtual cores will perform worse than the 8 cores of an AMD FX-8350 in a program or game that takes advantage of 8 cores.
Depending on the workload. AMD will throw Intel to the ground on a fully parallel application doing integer math. If you're doing floating point math, the Intel CPU will walk right over the AMD chip.

The problem is that people treat the multi-threading problem as if it were a consistent issue from task to task; it is not. Depending on what you're doing, multi-threading may well not yield any tangible benefit. If you have an application where every operation relies on the output of the last, multi-threading won't get you anything, because every instruction waits on the one before it, so there is no chance to run in parallel; the data dependencies are never met. In fact, such an application would run slower, because of all the overhead and zero gain. Games are very complex, and game devs have to ask themselves not only how to make something multi-threaded, but how to do it without making performance worse. More often than not, adding the overhead of multi-threading will slow down some parts of an application from a latency perspective, even though it might double or triple your compute throughput, and games often demand low latency, which adds complexity to the problem.
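The dependency argument above can be sketched with a toy example (Python, purely illustrative): the first function is a pure dependency chain that no number of cores can speed up, while the second processes independent items that could be split across threads or processes.

```python
def serial_chain(n):
    """A pure dependency chain: each step needs the previous result, so the
    work cannot be split across cores no matter how many are available."""
    x = 1
    for _ in range(n):
        x = (x * 3 + 1) % 1_000_003    # the next value needs the current one
    return x

def independent_work(items):
    """Every item is processed on its own; no result depends on any other,
    so this loop could be divided among threads or processes."""
    return [i * i for i in items]

print(serial_chain(2))              # 13
print(independent_work([1, 2, 3]))  # [1, 4, 9]
```

No scheduler can parallelize the first loop; the chain itself is the serial fraction that limits speedup.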

TL;DR: Many factors go into figuring out whether an application has parts that can run in parallel and parts that can't. Some tasks simply can't use multi-threading efficiently, due to the nature of the workload. I just wanted to make that 100% clear. As a software developer who writes multi-threaded applications, I have some idea of what parallel compute can offer and what its limitations are.

For example, at work I've been designing an integration system that uses priority queues to process database changes in parallel and to make API calls, which also go onto a priority queue and are executed in tandem. While it can run up to 40 threads at once (possibly more; it's just on a 20c/40t box right now,) it bottlenecks on the database connection, so utilization beyond 2-4 cores is never realized under normal operation. If I implemented a system that managed several database connections at once, divvying them out from a managed pool, I'm sure I could increase performance to 2-4 times what it is now. That would take time, effort, and system resources, and would impact latency. In this case adding latency is fine, but implementing it is a different story, because it takes time, and most businesses understand that time = money. It happens that my situation can be highly parallelized; games are more difficult because everything isn't so clear-cut.
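A minimal sketch of the shape described above: worker threads draining a priority queue, throttled by a small pool of simulated database connections. All names here (ConnectionPool, the task labels) are hypothetical stand-ins, not the actual system:

```python
import queue
import threading

class ConnectionPool:
    """Hands out a fixed number of simulated DB connections; workers block
    when the pool is exhausted, which is the bottleneck described above."""
    def __init__(self, size):
        self._conns = queue.Queue()
        for i in range(size):
            self._conns.put(f"conn-{i}")   # stand-in for a real connection
    def acquire(self):
        return self._conns.get()           # blocks while all connections are busy
    def release(self, conn):
        self._conns.put(conn)

STOP = "__stop__"                          # sentinel telling a worker to exit

def worker(tasks, pool, results):
    while True:
        priority, item = tasks.get()
        if item == STOP:
            tasks.task_done()
            break
        conn = pool.acquire()              # serializes work on the small pool
        try:
            results.append((priority, item))   # pretend to apply a DB change
        finally:
            pool.release(conn)
            tasks.task_done()

tasks = queue.PriorityQueue()              # lowest priority number runs first
pool = ConnectionPool(2)                   # only 2 "connections" for 8 workers
results = []
for job in [(2, "sync-orders"), (1, "sync-users"), (3, "sync-logs")]:
    tasks.put(job)
threads = [threading.Thread(target=worker, args=(tasks, pool, results))
           for _ in range(8)]
for t in threads:
    t.start()
for _ in threads:
    tasks.put((99, STOP))                  # one sentinel per worker
tasks.join()                               # wait until every task is done
for t in threads:
    t.join()
```

However many worker threads you start, throughput is capped by the pool size, which is exactly why adding cores beyond the connection count buys nothing.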
#24
FordGT90Concept
"I go fast!1!11!1!"
Aquinus said:
Jesus, not this discussion again. I think some clarification needs to be made on why AMD's core is a real core (in more respects than not.) First of all, Ford, you used the wrong term. 8-core AMD CPUs do have an ALU in every core. Each module does, however, share a floating-point unit. AMD CPUs can do 8 threads' worth of integer operations all day long with minimal loss in performance per thread. This changes when you start doing floating point math, because each module shares an FMA FPU that's 256 bits wide.
I know why I keep screwing that up: because it makes more sense to share ALUs and not FPUs. FPU performance tends to be lacking in processors where ALU performance is not, so it would make sense for AMD not to share FPUs. I am consistently wrong because AMD is consistently wrong. Instead of focusing on improving areas where performance was weak, they went with what was cheap. You're probably facepalming at me; pass that facepalm along to AMD. :laugh:

Aquinus said:
With that said, most operations that occur on a CPU are integer math. As a result, I would say that AMD has 8 real cores, because they have dedicated hardware for running 8 threads (even if some of it is shared.) This only doesn't hold true when you're talking about the FPU or the L2 cache (and Intel Core 2 Duos shared L2 cache between two cores, so I don't count that against it, really.)
Most operations are integer because most involve memory pointers and addresses but that doesn't really matter because a CPU is always only as good as its weakest link. Games rely heavily on floating-point and some are even using doubles (64-bit floating point) like Minecraft for position data. That's not something that can be farmed out to the GPU because it is constantly being checked and updated and a hell of a lot of variables are based off of it. There's also model animations that are extremely FPU heavy. If a game is running on two virtual cores and they both feed to the same FPU, that FPU can understandably cause the game to slow down.

Another example: I have BOINC running all of the time, which is extremely FPU intensive. If a BOINC task were running at high priority on one virtual core and something else (like a game) was running at normal priority on the other virtual core, that other program would have virtually no FPU work computed. If that software was waiting for an FPU computation to finish (like a game), it would lock up. If that same scenario were run on an actual octo-core, there would be little to no impact (the software would not lock up).

So I'd argue they are not octo-cores. They are quad-cores with semi-symmetrical multithreading. If you look at simultaneous multithreading in Nehalem and newer, it's impossible to tell which cores are virtual and which are not by performance. All virtual cores get 50%-100% (assuming the other virtual core is at 0-50%) of their underlying physical core's time. There's no way for one virtual core to choke a physical core.
#25
Aquinus
Resident Wat-man
FordGT90Concept said:
I know why I keep screwing that up: because it makes more sense to share ALUs and not FPUs. FPU performance tends to be lacking in processors where ALU performance is not, so it would make sense for AMD not to share FPUs. I am consistently wrong because AMD is consistently wrong. Instead of focusing on improving areas where performance was weak, they went with what was cheap. You're probably facepalming at me; pass that facepalm along to AMD. :laugh:
FPU performance is lacking because floating point math is, by definition, harder to do than integer math and requires more circuitry and more hardware. AMD shared the FPU because the integer cores tend to get used more often. It's really that simple.
FordGT90Concept said:
Most operations are integer because most involve memory pointers and addresses but that doesn't really matter because a CPU is always only as good as its weakest link.
Characters are integers too, so string operations are done in the ALU. I think you're underestimating how much is done on integer cores. In fact you don't even need an FPU because you can do floating point math on an integer ALU.
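That aside, that an FPU isn't strictly necessary, is the idea behind fixed-point arithmetic: scale decimal values into plain integers, do the math on the ALU, and rescale. A toy sketch:

```python
SCALE = 10_000  # fixed point with 4 decimal digits of fraction

def to_fixed(x: str) -> int:
    """Parse a decimal string into a scaled integer; no float is involved."""
    whole, _, frac = x.partition(".")
    frac = (frac + "0000")[:4]                  # pad/truncate to 4 digits
    sign = -1 if whole.startswith("-") else 1
    return sign * (abs(int(whole)) * SCALE + int(frac))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values using only integer operations."""
    return (a * b) // SCALE                     # floor division; fine for a sketch

# 0.1 + 0.2 is exact in fixed point but not in binary floating point:
a, b = to_fixed("0.1"), to_fixed("0.2")
print(a + b == to_fixed("0.3"))   # True  (integer ALU, exact)
print(0.1 + 0.2 == 0.3)           # False (float rounding error)
```

This is also why fixed point is preferred where exactness matters: 0.1 and 0.2 have no exact binary-float representation, but their scaled-integer forms are exact.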
FordGT90Concept said:
Games rely heavily on floating-point and some are even using doubles (64-bit floating point) like Minecraft for position data. That's not something that can be farmed out to the GPU because it is constantly being checked and updated and a hell of a lot of variables are based off of it.
Minecraft isn't using more than 4 threads to calculate position information and I'm willing to bet it uses single precision not doubles for such calculations because such exactness isn't required. Most games have little use for double-precision as it gets you very little and takes more memory and time to calculate.
FordGT90Concept said:
There's also model animations that are extremely FPU heavy.
Like AutoCAD? You mean applications that in many cases are already accelerated by GPUs? I should note that GPUs also don't strictly do floating point math, they do integer math too. Just because it runs on the GPU doesn't mean it does floating point math. Just wanted to make that clear too because many scientific applications might prefer fixed point numbers over floats for the sake of maintaining exact precision.
FordGT90Concept said:
If a game is running on two virtual cores and they both feed to the same FPU, that FPU can understandably cause the game to slow down.
I seriously doubt that every instruction is going to be one floating point operation after another. If it's doing that kind of mass processing, a GPU would be useful. Once again, I think this is a testament to how much you're underestimating how much integer math is done in most applications, even 3D ones.
FordGT90Concept said:
Another example: I have BOINC running all of the time, which is extremely FPU intensive. If a BOINC task were running at high priority on one virtual core and something else (like a game) was running at normal priority on the other virtual core, that other program would have virtually no FPU work computed. If that software was waiting for an FPU computation to finish (like a game), it would lock up. If that same scenario were run on an actual octo-core, there would be little to no impact (the software would not lock up).
That's a fuddish statement and I will explain why. BOINC (in particular WCG,) provides workloads of all different kinds. So while one WU might have a lot of floating point math, another might use purely integer math, because sometimes the error introduced when using floats is unacceptable, so fixed-point math (with integers) makes more sense. So to me this statement just makes me want to facepalm, because there is absolutely nothing to indicate that all projects run through BOINC are strictly floating point in nature. That's an over-generalization.
FordGT90Concept said:
So I'd argue they are not octo-cores. They are quad-cores with semi-symmetrical multithreading.
Then we agree to disagree, because in reality most operations being carried out are integer in nature, so unless you're crunching floating point numbers it won't be a big deal. They have more dedicated parts than shared parts, which is why I think your statement is wrong. They're not slower cores, they're shared, and quite frankly, if you have an 8c AMD CPU you can still do 4 floating point ops at once, which is no less impressive than the Phenom II, which also had 4 FPUs; the difference is that the Phenom II only had 4 integer ALUs instead of 8.

If your measure for "cores" is by how many FPUs it has, then you've already forgotten what the purpose of a CPU is.