
Moore's Law Buckles as Intel's Tick-Tock Cycle Slows Down

btarunr

Editor & Senior Moderator
Intel co-founder Gordon Moore's prediction that transistor counts in microprocessors can be doubled every two years, by miniaturizing the silicon fabrication process, is beginning to buckle. In its latest earnings release, CEO Brian Krzanich said that the company's recent product cycles marked a slowing of its "tick-tock" product development cadence from two years to closer to 2.5 years. With the company approaching sub-10 nm scales, it's bound to stay that way.
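Not from the article, just a back-of-the-envelope illustration: the gap between a two-year and a 2.5-year doubling cadence compounds quickly. The Python sketch below uses a made-up starting transistor count to show how far the two projections drift apart over a decade.

```python
# Hedged sketch: projecting transistor counts under different doubling periods.
# The starting count (1.4 billion) and the time horizons are illustrative
# assumptions, not figures from the article.

def projected_transistors(start_count, years, doubling_period):
    """Moore's Law projection: count doubles once every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

start = 1.4e9  # assumed starting transistor count
for years in (2, 5, 10):
    tick_tock = projected_transistors(start, years, doubling_period=2.0)
    slowed = projected_transistors(start, years, doubling_period=2.5)
    print(f"after {years:>2} years: 2.0-yr cadence {tick_tock/1e9:6.1f} B, "
          f"2.5-yr cadence {slowed/1e9:6.1f} B")
```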

To keep Moore's Law alive, Intel adopted a product development strategy it calls tick-tock. Think of it as a metronome that gives rhythm to the company. Each "tock" marks the arrival of a new micro-architecture, and each "tick" marks its miniaturization to a smaller silicon fab process. Normally, each year sees one of the two, in alternation.



"Penryn" was Intel's first 45 nm chip and miniaturization of "Conroe", "Nehalem" was a newer architecture on 45 nm, "Westmere" was its miniaturization to 32 nm, "Sandy Bridge" was a newer architecture on 32 nm, "Ivy Bridge" was its miniaturization to 22 nm, "Haswell" was a new architecture on 22 nm, and "Broadwell" is its miniaturization to 14 nm. "Skylake" is a new architecture on 14 nm. It was all well and good until "Broadwell."

Intel arrived at "Broadwell" later than expected, because implementing the 14 nm node took longer than planned. Intel launched its "Haswell Refresh" silicon to hold ground from mid-2014 to mid-2015. To make up for the lost time, the company allowed "Broadwell" to be cannibalized by an on-schedule "Skylake" launch. The two less-than-memorable Broadwell desktop chips, the i7-5775C and i5-5675C, are tough to find in the retail market.

Krzanich suggested that the company could face similarly slow product development cycles as it approaches the limits of how small it can make its chips using existing materials. With "Skylake" out this August, it could be a while before you see its miniaturization to 10 nm, codenamed "Cannonlake." IBM has had better luck with sub-10 nm: the company recently demonstrated a 7 nm chip built on a new silicon-germanium alloy substrate. IBM recently sold the division responsible for this, in toto, to GlobalFoundries, the principal foundry partner of AMD. Krzanich concluded that Moore's Law is still "alive and safe," and that this isn't the first time it has faced difficulties.

View at TechPowerUp Main Site
 
You know... people have been talking about this for like the last 10 years and it still hasn't actually happened.
 
Everyone should have seen this coming.

It can't just keep getting smaller. I know a lot of people think it is magic; well it isn't, it is engineering, and they literally aren't able to engineer it smaller right now!

I am sure competition plays some part in this: the slower they make new chips, the more they make on current chips, meaning they don't have to spend as much on engineering, research & design, and new market research. The competition is themselves.
 
I think what goes unsaid is that the cost to research smaller processes is going to start climbing exponentially--at least until an alternative is found that doesn't rely on electrons.

IBM did demonstrate a 7nm process, but that's hardly production-ready. It was only a proof of concept.
 
You know... people have been talking about this for like the last 10 years and it still hasn't actually happened.
What do you mean it hasn't happened? We have been on the same 28nm process node for GPUs for over two years now, and now even the mighty Intel has reached that wall; the race to manufacture profitable and reliable consumer chips at 10nm and below will require pushing the limits of optics and current materials, and that's a fact.
 
I always laugh when anyone brings up Moore's Law. Moore's Law exists solely for/because of financial purposes (profit). It has nothing to do with actual limitations of technology. We'd be on god knows what kind of technology if money wasn't holding everything back. Why innovate and invent new things or make things ridiculously faster if you can sell a bunch of old ones at full price and bump the speed slightly every half a year, instead of making giant leaps every year...

It didn't slow down because it's so hard to make good tech; it slowed down because Intel has no interest in making things faster and cheaper with pretty much no real competition around...
 
The last I read from Intel is that they will be moving away from silicon after 10nm. Rumors have it that new materials will allow much higher clocks so there should be advances in performance even without sub-7nm node. Intel will possibly be using indium gallium arsenide.

http://arstechnica.com/gadgets/2015...d-to-10nm-will-move-away-from-silicon-at-7nm/
[Image: intel-10nm-challenges1.png]

The graph on the left was what I was talking about. That line could turn sharply up unless breakthroughs are made.
 
Once scientists find a way to mass-produce silicene or graphene, that'll be the time when going smaller would be much easier - and the limit then would be up to the size of an atom ;)
 
You know... people have been talking about this for like the last 10 years and it still hasn't actually happened.

Holy crap, dude. Back from the dead?
 
I always laugh when anyone brings up Moore's Law. Moore's Law exists solely for/because of financial purposes (profit). It has nothing to do with actual limitations of technology. We'd be on god knows what kind of technology if money wasn't holding everything back. Why innovate and invent new things or make things ridiculously faster if you can sell a bunch of old ones at full price and bump the speed slightly every half a year, instead of making giant leaps every year...

It didn't slow down because it's so hard to make good tech; it slowed down because Intel has no interest in making things faster and cheaper with pretty much no real competition around...

Hear, hear. It helps when the full law is considered:

The number of transistors economically viable on a chip will double every two years. The only reason it has been linked to node size is because that's how you made transistors cheaper, and that is already starting to fail.

I remember a more interesting graph showing the relative performance of CPUs from a computer architecture class I had at university; it flattens out after Nehalem.
 
Ages ago I read an article about using synthetic diamond, dunno what happened to that.

Anyway, I am expecting some sci-fi stuff in the coming years.
 
What do you mean it hasn't happened? We have been on the same 28nm process node for GPUs for over two years now
Actually, over 3 and a half. The 28nm Tahiti (HD 7970) launched in Dec 2011 / Jan 2012.

Intel has still been keeping Moore's Law alive, but TSMC (and everyone else) has been behind it for a long time. With Broadwell that law no longer applies to Intel either.
 
Cost reduction is needed to justify new technology generations

I would prefer that progress be driven by the desire for more performance and work done, instead of by someone greedy and stupid trying to save money and make more profit.
Actually, over 3 and a half. The 28nm Tahiti (HD 7970) launched in Dec 2011 / Jan 2012.

Intel has still been keeping Moore's Law alive, but TSMC (and everyone else) has been behind it for a long time. With Broadwell that law no longer applies to Intel either.

Of course, that law is not a law but nonsense. :laugh:
 
There was a lot of talk about the magic of graphene. And then it all went silent. I wonder if that's even still on the table...
 
Moore's Law exists solely for/because of financial purposes (profit). It has nothing to do with actual limitations of technology. We'd be on god knows what kind of technology if money wasn't holding everything back. Why innovate and invent new things or make things ridiculously faster if you can sell a bunch of old ones at full price and bump the speed slightly every half a year, instead of making giant leaps every year...

^This.

There is also a difference between technology that the public knows about and uses, and technology that is magnitudes better but will never be advertised until years or decades later. It is simply dribbled out as TPTB see fit.
 
There was a lot of talk about the magic of graphene. And then it all went silent. I wonder if that's even still on the table...

My brother is an EE in the LED department of a major multinational electronics manufacturer, and according to him, graphene, while promising, is still incredibly expensive to produce in anything other than powder form, and methods for mass-producing it in uniform sheets or wafers larger than a few mm are still beyond current tech. He doesn't predict there will be mass-market implementation for at least another 10-15 years, and that is only if some major hurdles in production are overcome. He also said that there is a lot of internal discussion within this manufacturer, and that there is about a 50% chance graphene may never be an economically viable replacement for silicon. It has a lot of other uses (cell phone antennas have actually been one of the big areas of development), but as a replacement for the silicon substrate in processors, don't get your hopes up.
 
Oh well, this slowdown had to come at some point.

I remember it being quoted as doubling every 1.5 years many moons ago, so it has been slowing down for a long time. In fact, I'd say we actually hit the brick wall way back in 2004, when power usage and heat became too high and clock rates got stuck, maxing out at 3.5-4 GHz. Who remembers Intel's initial plans to have the Pentium 4 scale to 10 GHz and beyond? They came crashing down because of this issue.

Intel and others then managed to work around it by increasing core counts, transistor density and architectural efficiency, but it's obvious that this strategy can't last forever, as transistors eventually approach the size of atoms, so there's a minimum feature size.
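For context on why clock rates stalled (my own illustration, not from the post above): dynamic switching power scales roughly as P ≈ C·V²·f, and higher clocks usually need higher voltage, so power grows much faster than frequency. The capacitance figure and the voltages assumed at each frequency in this sketch are entirely illustrative.

```python
# Hedged sketch: why frequency scaling hit a power wall.
# Dynamic power scales roughly as P ~ C * V^2 * f.  The capacitance and the
# voltage assumed at each frequency are made-up, illustrative values.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Approximate switching power of a CPU: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

C = 1e-8  # assumed effective switched capacitance, in farads
for freq_ghz, volts in [(3.5, 1.10), (5.0, 1.35), (10.0, 1.80)]:
    watts = dynamic_power(C, volts, freq_ghz * 1e9)
    print(f"{freq_ghz:4.1f} GHz at {volts:.2f} V -> ~{watts:5.1f} W (illustrative)")
```

Under these assumptions, pushing from 3.5 GHz to 10 GHz roughly triples frequency but increases power almost eightfold, which is the kind of wall the Pentium 4 ran into.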

This limitation is a real bummer, so I hope they find another way around it soon.
 
Watch: AMD releases Zen, and suddenly Intel has a whole new world of performance CPUs that put the current ones to shame.

So full of BS Intel...so full of BS...
 
Watch: AMD releases Zen, and suddenly Intel has a whole new world of performance CPUs that put the current ones to shame.

So full of BS Intel...so full of BS...
Unlikely. Both AMD and nVidia have been stuck on the 28 nm process for their products, while Intel can put out stuff on a 14 nm process. Zen would need to have major architectural efficiencies to compensate for that.
 
Watch: AMD releases Zen, and suddenly Intel has a whole new world of performance CPUs that put the current ones to shame.

So full of BS Intel...so full of BS...

Maybe, if they had 50 times the R&D budget, and a time machine that could take them 5 years into the future.
 
Unlikely. Both AMD and nVidia have been stuck with 28 nm process for their products while Intel can put out stuff on 14 nm process. Zen would need to have major architectural efficiencies to compensate for that.
Zen will be 14nm. But yeah, it likely won't catch Skylake, and if it does, Intel won't have a new product ready for a while (they could still probably bump clocks at the expense of power consumption, like Devil's Canyon). Making new fabs and architectures is very hard nowadays; Intel doesn't have multiple generations of CPUs ready to counter AMD's possible comeback, like some seem to believe.
 
It's still being researched as far as I know. Hopefully the obstacles will be overcome. It would be amazing to have a CPU run at 400 GHz.

http://www.technologyreview.com/view/518426/how-to-save-the-troubled-graphene-transistor/

There are SiGe transistors that have done 798 GHz.

Single conventional Si transistors can also hit really, really high clocks. The problem is that in a CPU you have several billion transistors, and if one of them can't do more than 4 GHz at a given voltage, then the whole CPU has to run at 4 GHz. If you had CPUs with really low transistor counts, you could in theory end up with chips that run at super high clocks.
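To illustrate that point with a toy model (made-up numbers, nothing measured): the whole die can only clock as fast as its slowest path allows, so one weak path out of millions drags everything down.

```python
# Hedged sketch: the whole CPU is limited by its slowest switching path.
# A million random per-path frequency limits stand in for billions of
# transistors; the values are purely illustrative.
import random

random.seed(0)
# Pretend each "path" has a maximum stable frequency at a given voltage (GHz).
path_limits_ghz = [random.gauss(5.0, 0.3) for _ in range(1_000_000)]

chip_clock = min(path_limits_ghz)  # one slow path caps the entire die
print(f"fastest path: {max(path_limits_ghz):.2f} GHz")
print(f"slowest path: {chip_clock:.2f} GHz  <- the clock the whole CPU must run at")
```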

IBM has CPUs that ship at 5 GHz+.
 