Monday, August 11th 2014

Intel Details Newest Microarchitecture and 14 Nanometer Manufacturing Process

Intel today disclosed details of its newest microarchitecture that is optimized with Intel's industry-leading 14 nm manufacturing process. Together these technologies will provide high-performance and low-power capabilities to serve a broad array of computing needs and products from the infrastructure of cloud computing and the Internet of Things to personal and mobile computing.

"Intel's integrated model - the combination of our design expertise with the best manufacturing process - makes it possible to deliver better performance and lower power to our customers and to consumers," said Rani Borkar, Intel vice president and general manager of product development. "This new microarchitecture is more than a remarkable technical achievement. It is a demonstration of the importance of our outside-in design philosophy that matches our design to customer requirements."
"Intel's 14 nanometer technology uses second-generation Tri-gate transistors to deliver industry-leading performance, power, density and cost per transistor," said Mark Bohr, Intel senior fellow, Technology and Manufacturing Group, and director, Process Architecture and Integration. "Intel's investments and commitment to Moore's law is at the heart of what our teams have been able to accomplish with this new process."

Key Points:
  • Intel disclosed details of the microarchitecture of the Intel Core M processor, the first product to be manufactured using the 14 nm process.
  • The combination of the new microarchitecture and manufacturing process will usher in a wave of innovation in new form factors, experiences and systems that are thinner and run silent and cool.
  • Intel architects and chip designers have achieved a greater than two-fold reduction in the thermal design point (TDP) compared to the previous generation of processor, while providing similar performance and improved battery life.
  • The new microarchitecture was optimized to take advantage of the new capabilities of the 14 nm manufacturing process.
  • Intel has delivered the world's first 14 nm technology in volume production. It uses second-generation Tri-gate (FinFET) transistors with industry-leading performance, power, density and cost per transistor.
  • Intel's 14 nm technology will be used to manufacture a wide range of high-performance to low-power products, including servers, personal computing devices and Internet of Things devices.
  • The first systems based on the Intel Core M processor will be on shelves for the holiday selling season followed by broader OEM availability in the first half of 2015.
  • Additional products based on the Broadwell microarchitecture and 14 nm process technology will be introduced in the coming months.

18 Comments on Intel Details Newest Microarchitecture and 14 Nanometer Manufacturing Process

#1
ur6beersaway
  • "The first systems based on the Intel Core M processor will be on shelves for the holiday selling season followed by broader OEM availability in the first half of 2015."
  • And then finally to us DIYers... Merry Christmas 2016.
#2
XL-R8R
Aside from the obvious new socket needed: if the current trend of ~5% performance gains over the previous generation is maintained, then I can't really see any reason to throw away this 3570K of mine any time soon.... shit, even my old Xeon 3520 rig is still doing pretty damn good when it comes to gaming performance....

Sad, really.
#3
DayKnight
^ EXACTLY.

A week or two ago, I was looking at some of Intel's "supposed" next-gen processor/architecture lineup. I see NO REASON to do away with my processor. People who OC are in MUCH better shape. The current gen is a huge shame! Plus there's Intel's 'change the base every 2 years' philosophy. All that does is cement me to my current setup.

A six-core Intel is what I want at the current 4570/K price.
#4
zenlaserman
And if frogs had wings, they wouldn't bump their ass a-hoppin.
#5
Aquinus
Resident Wat-man
I think that they need to explain "a new micro-architecture" a bit more before I can pass judgement on this.
#6
natr0n
Getting tired of Intel.

Hope one day someone can overthrow them.
#7
nunomoreira10
If it's so area-efficient, then they may as well make 6 cores standard instead of better graphics. Lack of competition is really hurting the consumer.
#8
Scrizz
natr0n: Getting tired of Intel.

Hope one day someone can overthrow them.
What we need is for the other chip companies to step up their game.
nunomoreira10: If it's so area-efficient, then they may as well make 6 cores standard instead of better graphics. Lack of competition is really hurting the consumer.
AFAIK the X99 chips come with 6 cores standard.
#9
The Von Matrices
DayKnight: A six-core Intel is what I want at the current 4570/K price.
nunomoreira10: If it's so area-efficient, then they may as well make 6 cores standard instead of better graphics. Lack of competition is really hurting the consumer.
You cannot reasonably blame a lack of competition for keeping >4 core CPUs out of the mainstream.

The problem is that the mainstream market doesn't use applications that can consistently load more than 4 cores. Most enthusiasts can't even justify having more than 4 cores. At the same time, the mainstream is demanding increased graphics performance. That means it would be a poor business decision to focus on adding more cores, exactly what most consumers cannot use, instead of graphics. AMD took that approach with its FX CPUs, and those chips never became big sellers because the design runs directly counter to what the market actually demands: mobile devices.

It's not a conspiracy that both AMD and Intel have the same mainstream strategy (keep core counts low; improve graphics instead). If you add more competitors to the market, you will get lower margins and thus lower prices for consumers. However, you still won't have companies designing mainstream-priced 6-core CPUs, because adding competitors doesn't change the fact that there still isn't a large market to sell them to. If this weren't true, then Intel's (currently available) 8-core Atoms would be flying off the shelves.
#10
micropage7
14 nm? So can we expect something with more performance and lower power consumption now?
#11
DayKnight
The Von Matrices: You cannot reasonably blame a lack of competition for keeping >4 core CPUs out of the mainstream...
Thank you. I'll stick with my 3570 then. :D

If any mid-sized to major app can't use 4 cores, the developers should be fired. In this day and age, multithreading should be the baseline, the standard, the norm.
#12
GreiverBlade
So is this Broadwell, or the one after BW?

I just bought a Maximus VII Ranger (due to the compatibility with Broadwell, hmm... not really, in fact :roll:) and an i5-4690K. Well, I think I might be able to hold onto them for a while, so... :D
Cristian_25H: Additional products based on the Broadwell microarchitecture and 14 nm process technology will be introduced in the coming months.
Oh, I see... well, let's just hope the Z97 compatibility with BW will not be "words in the wind".
#13
nunomoreira10
The Von Matrices: You cannot reasonably blame a lack of competition for keeping >4 core CPUs out of the mainstream.

The problem is that the mainstream market doesn't use applications that can consistently load more than 4 cores.
You need both sides to drive innovation: broad implementation of applications that support multiple cores, and a good base of people with multi-core processors to motivate the development of that software.
#14
Aquinus
Resident Wat-man
nunomoreira10: You need both sides to drive innovation: broad implementation of applications that support multiple cores, and a good base of people with multi-core processors to motivate the development of that software.
Explain this, because as a developer it sounds like rhetorical nonsense to me. Do you write multi-threaded code? I get the distinct impression that you think it's easy to make concurrent software, when that is definitely not the case. It's not always a matter of motivation. Some applications can only be parallelized so much, and an application whose code mostly has to run serially won't benefit from more cores, regardless of how you develop it.

Either way, if you have shared state in your application, there is no getting around this issue, because coordinating state between threads takes time and CPU cycles. Not to mention that if you're updating information other threads need to know about, you need to "lock" that state to prevent race conditions. So your multi-threaded application instantly has serial limitations: it could be running on 10 threads but might not utilize more than 2 cores.
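
A minimal sketch of that point (my illustration in Python, not Aquinus's code; the worker function and counts are hypothetical): ten threads all contending for one lock still update the shared counter one at a time, so the locked section runs serially no matter how many cores are available.

    import threading

    counter = 0
    lock = threading.Lock()

    def worker(iterations):
        global counter
        for _ in range(iterations):
            # Every thread must hold the same lock to touch the shared state,
            # so these increments execute one at a time regardless of core count.
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # 1000000 -- correct, but the hot path was effectively serial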
#15
The Von Matrices
Aquinus: Explain this, because as a developer it sounds like rhetorical nonsense to me. Do you write multi-threaded code?...
I agree with you wholeheartedly.

For all the people who advocate more cores, I think it's worth noting that this isn't a chicken-and-egg scenario. There's no hardware limitation stopping developers from writing code that is extremely parallel; with context switching you can still run highly parallel code on fewer cores than there are threads.

Remember that the ultimate goal is to increase performance. It doesn't matter how many cores your software can use; if, for the same development effort, you can achieve greater performance gains by making the code run more efficiently (without regard to parallelism) than by going through the hassle of increasing parallelism, then the former is the better option.
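
To put a rough number on that trade-off (my illustration, not The Von Matrices' math; the amdahl_speedup helper is hypothetical): Amdahl's law says that if a fraction p of a program can run in parallel, the best possible speedup on n cores is 1 / ((1 - p) + p / n), so shrinking the serial part can pay off more than adding cores.

    def amdahl_speedup(p, n):
        """Upper bound on speedup when a fraction p of the work is
        parallelizable and is spread across n cores (Amdahl's law)."""
        return 1.0 / ((1.0 - p) + p / n)

    # With 80% parallel code, speedup tops out under 5x no matter the core count:
    for n in (2, 4, 8, 64):
        print(n, "cores:", round(amdahl_speedup(0.80, n), 2))

    # Halving the serial fraction does as much as piling on cores:
    print("8 cores at p=0.90:", round(amdahl_speedup(0.90, 8), 2))   # ~4.71
    print("64 cores at p=0.80:", round(amdahl_speedup(0.80, 64), 2))  # ~4.71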
#16
Aquinus
Resident Wat-man
The Von Matrices: I agree with you wholeheartedly...
Right? It's amazing how people who aren't developers (or experienced multi-threaded developers) think it's easy to write good multi-threaded code. Anyone can write a multi-threaded app, but if there is so much coordination that needs to be done, you could be using 20 threads and still only be using a single core's worth of resources (scripting languages with a GIL make this very clear).
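
A quick way to see that GIL effect (a hypothetical benchmark of mine, assuming CPython; the burn function and N are made up): CPU-bound work split across four threads finishes in roughly the same wall-clock time as the single-threaded run, because the interpreter lock lets only one thread execute Python bytecode at a time.

    import threading
    import time

    def burn(n):
        # Pure-Python CPU-bound loop; it holds the GIL while it runs.
        total = 0
        for i in range(n):
            total += i * i
        return total

    N = 5_000_000

    start = time.perf_counter()
    burn(N)
    print("1 thread :", round(time.perf_counter() - start, 2), "s")

    start = time.perf_counter()
    threads = [threading.Thread(target=burn, args=(N // 4,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("4 threads:", round(time.perf_counter() - start, 2), "s")
    # On CPython both timings come out about the same: 4 threads, ~1 core used.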
#17
Frick
Fishfaced Nincompoop
XL-R8R: Aside from the obvious new socket needed: if the current trend of ~5% performance gains over the previous generation is maintained, then I can't really see any reason to throw away this 3570K of mine any time soon.... shit, even my old Xeon 3520 rig is still doing pretty damn good when it comes to gaming performance....

Sad, really.
So you don't actually have to spend tons of cash every other year to have a fast system; how is that a bad thing?
#18
DayKnight
The Von Matrices: I agree with you wholeheartedly...
I conclude that you are better and have more knowledge than Intel and AMD. ;)

Especially AMD, as they threw the 8-corer at us.