Tuesday, March 19th 2019

AMD Says Not to Count on Exotic Materials for CPUs in the Next Ten Years, Silicon Is Still Computing's Best Friend

Forrest Norrod, senior VP of AMD's datacentre group, said at the Rice Oil and Gas HPC conference that while graphene holds incredible promise for the world of computing, it will likely take some ten years before such exotic materials are actually taken advantage of. As Norrod puts it, silicon still has a pretty straightforward - if increasingly complex - path down to 3 nanometer densities. According to him, at the rate manufacturers are able to scale down their production nodes, the average time between node transitions now stands at some four or five years - which puts the jump to 5 nm and then 3 nm roughly ten years from now, over which span Norrod expects the manufacturing process to go through those two additional node shrinks.
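Norrod's timeline is simple arithmetic: two remaining node transitions at four to five years each lands roughly ten years out. A back-of-the-envelope sketch (the node names and cadence come from his remarks; the start year is an assumption):

```python
# Rough node-cadence arithmetic from Norrod's remarks:
# two more shrinks at ~4-5 years per transition.
start_year = 2019             # assumption: counting from the talk
transitions = ["5nm", "3nm"]  # the two remaining shrinks Norrod cites
years_per_node = (4 + 5) / 2  # midpoint of the 4-5 year cadence

eta = {}
year = start_year
for node in transitions:
    year += years_per_node
    eta[node] = year

print(eta)  # {'5nm': 2023.5, '3nm': 2028.0} - i.e. ~10 years to 3 nm
```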

Of course, graphene is being hailed as the best candidate to take over silicon's place at the heart of our most complex, high-performance electronics, due in part to its high conductivity, which is largely independent of temperature variation, and its extremely fast switching - it has been shown to operate at terahertz switching speeds. It's a 2D material, which means implementations will require depositing sheets of graphene onto some other substrate material.
Of course, there's also the matter of quantum computing, on which Norrod takes a cautious, measured approach: he expects the technology to flourish within the next 10 to 100 years - which, I think we can all agree, is a pretty safe bet. Even though quantum computing is geared towards some specific workloads and wouldn't be able to completely replace "traditional" processing designs and approaches, it's a technology that can be developed side by side with traditional computing (even if the latter ends up being achieved with recourse to exotic materials). Source: PCGamesN
Add your own comment

33 Comments on AMD Says Not to Count on Exotic Materials for CPUs in the Next Ten Years, Silicon Is Still Computing's Best Friend

#1
Gungar
There is no significant performance increase since 22nm (4790k) so wtf is AMD talking about?
Posted on Reply
#2
zenlaserman
Makes sense, we've been looking at CPU overkill for a long time. Google became powerful starting with their ability to chain a bunch of low-end computers together to accomplish something bigger, and we've seen Intel and AMD's CPU strategy utilizing chiplets with an increasing focus in that regard, probably to spill into high-end GPUs.

For the most part, doesn't software still need to keep up? LOL..programmers, ugh.
Posted on Reply
#3
64K
I guess it's to be expected that some new material will only come along when silicon is no longer viable or too expensive to continue going to a lower process node. Necessity is the mother of invention.
Posted on Reply
#4
medi01
Good to hear we'll get 5nm then 3nm.
Although it might well take more than 10 years.

Gungar said:
There is no significant performance increase since 22nm (4790k) so wtf is AMD talking about?
Multithreading performance.
Posted on Reply
#5
londiste
AMD has no fabs and no R&D on manufacturing processes, do they?
Are they going by TSMC roadmaps?
Posted on Reply
#6
Readlight
Let's build bricks from this super-strong material, if someone knows how to produce it.
Posted on Reply
#7
AmioriK
Can't just keep adding more cores. At some point we're going to have to make each core a lot faster (new base material, for MUCH higher GHz), or drop "cores" altogether; organic computers are potentially the future. Our brains are pretty damn powerful and use around 20W of power :)
Posted on Reply
#8
Crackong
Gungar said:
There is no significant performance increase since 22nm (4790k) so wtf is AMD talking about?
22nm (4th gen) to 14nm (6/7/8/9th gen) is just one generational improvement.
Please don't count the + + + + + after that; they are all within the same 14nm process.
Ignore the monopoly consumer market at that time and check the Xeon market, where true improvements have been made.

Put 14nm Xeon v4 vs 22nm Xeon v3,
They packed 15% - 20% more cores into a CPU with the same Frequencies and TDP.
That's significant.
Posted on Reply
#9
Vayra86
Meanwhile, at AMD, many open doors were kicked in once more.
Posted on Reply
#10
bug
AMD may be saying that, but to whom?
As an end user, I couldn't care less if the CPU was built out of sand or iron. And those that actually build CPUs probably know what materials they can count on without advice from AMD.
Posted on Reply
#11
Gungar
Crackong said:
22nm (4th gen) to 14nm (6/7/8/9th gen) is just one generational improvement.
Please don't count the + + + + + after that; they are all within the same 14nm process.
Ignore the monopoly consumer market at that time and check the Xeon market, where true improvements have been made.

Put 14nm Xeon v4 vs 22nm Xeon v3,
They packed 15% - 20% more cores into a CPU with the same Frequencies and TDP.
That's significant.
That's the thing: we need more powerful cores, not more cores.
Posted on Reply
#12
Nxodus
AmioriK said:
or drop "cores" altogether
Oh my, AMD will be in trouble then:) That's their only selling point
Posted on Reply
#13
64K
That's what I think as well. I was reading an article on ArsTechnica a few years ago about the possibility of using carbon nanotubes. It was speculated that if the hurdles could be overcome then it might be possible to gain 5 times the speed for 1/5th the watts used compared to a similar node with silicon. Faster cores will be important as the years go by but efficiency is very desirable as well. Using so little power would mean longer battery life on mobile devices and laptops or lighter weight with reduced battery size.
Posted on Reply
#14
Imsochobo
Gungar said:
That's the thing: we need more powerful cores, not more cores.
Sorry, you can't have that.
Game companies have started to learn that as well.
Posted on Reply
#15
xenocide
64K said:
That's what I think as well. I was reading an article on ArsTechnica a few years ago about the possibility of using carbon nanotubes. It was speculated that if the hurdles could be overcome then it might be possible to gain 5 times the speed for 1/5th the watts used compared to a similar node with silicon. Faster cores will be important as the years go by but efficiency is very desirable as well. Using so little power would mean longer battery life on mobile devices and laptops or lighter weight with reduced battery size.
The problem with Carbon Nanotubes is that they are incredibly difficult to work with, and insanely expensive to make. We've spent what, like 40 years developing Silicon-based computing devices? We've been working on Carbon Nanotubes for under a decade, and it's probably going to take ~10-15 years before we can make really complex devices using them without also spending an absolute fortune.
Posted on Reply
#16
bug
The mere fact we haven't identified a viable alternative to silicon should be enough to understand silicon isn't going anywhere anytime soon.
The fact that everybody is still exploring alternatives tells us we don't even have a viable candidate for the time being. The properties of the materials are understood pretty well already, what I think we need is engineering breakthroughs in lowering costs for some alternative.
Posted on Reply
#17
TristanX
AMD knows best, better than Intel, TSMC and Samsung combined. Let them begin making quantum Ryzens.
Posted on Reply
#18
Jozsef Dornyei
Creating programs that run in parallel is not rocket science. You need to start with the scheduler and write every part of the program as short tasks scheduled by it. Windows is, at its core, a scheduler. :-)
The problems start when you are using an engine that is not written like that. The engine is out of your control.

There are still few programs written this way. The extra challenge is that many PCs have weaker cooling than their processor would require. The reason these systems keep running is that they never run well-optimized programs. If you create one, many people will complain about suddenly unstable systems: 90+% load on the CPU will overheat it, and the system will crash in ten minutes or so. So you must build in a throttle that lets you limit the CPU usage to some percentage. If you don't do that, many PCs will not be able to run your program.
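The throttle described above can be sketched as a simple duty cycle: do a slice of work, then sleep long enough to keep average CPU load near a target. A minimal illustration, not from any real engine - `throttled_worker`, `target_load`, and the busy-work lambda are all hypothetical names for the idea:

```python
import threading
import time

def throttled_worker(work_fn, target_load=0.7, slice_s=0.05, stop=None):
    """Run work_fn repeatedly, keeping average CPU load near target_load.

    After each slice_s seconds of work, sleep just long enough that
    work time / (work time + sleep time) ~= target_load.
    """
    while stop is None or not stop.is_set():
        start = time.perf_counter()
        deadline = start + slice_s
        while time.perf_counter() < deadline:
            work_fn()
        worked = time.perf_counter() - start
        # Solve worked / (worked + slept) == target_load for slept:
        time.sleep(worked * (1.0 - target_load) / target_load)

# Example: burn CPU at roughly a 50% duty cycle for half a second.
stop = threading.Event()
t = threading.Thread(target=throttled_worker,
                     args=(lambda: sum(range(1000)), 0.5, 0.02, stop))
t.start()
time.sleep(0.5)
stop.set()
t.join()
```

Real engines would hook this into their task scheduler rather than sleeping in a loop, but the principle - capping the duty cycle so weakly cooled PCs don't overheat - is the same.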
Posted on Reply
#19
bug
TristanX said:
AMD knows best, better than Intel, TSMC and Samsung combined. Let them begin making quantum Ryzens.
This isn't about AMD's technological prowess. It's just a PR piece that's rather misplaced/suspicious in the eyes of anyone with an interest in electronics.

The thing is, AMD did try to win this battle on technical merit alone, back in AthlonXP/64 days. We all know how that played out. So PR (even as hapless as in this instance) is still a move in the right direction for them. (Fwiw, I think in general their PR is doing a pretty good job.)

Jozsef Dornyei said:
Creating programs that run in parallel is not rocket science.
That much is true. Testing said programs and ensuring they do what you think they do, that's where the pain starts.
Posted on Reply
#20
Metroid
Graphene is the future, but it's not here yet. What we actually need is a single core that's 1000x faster than today's silicon CPUs (an i7 9700 at 5 GHz, say), and graphene could deliver that. Multicore is only acceptable when the material design can't deliver it all on a single core.
Posted on Reply
#21
Caring1
AmioriK said:
Can't just keep adding more cores. At some point we're going to have to make each core a lot faster (new base material, for MUCH higher GHz), or drop "cores" altogether; organic computers are potentially the future. Our brains are pretty damn powerful and use around 20W of power :)
I disagree.
If we want to use a brain as a model, we need a hell of a lot more cores, a lot smaller, and far more efficient.
Posted on Reply
#22
bug
Caring1 said:
I disagree.
If we want to use a brain as a model, we need a hell of a lot more cores, a lot smaller, and far more efficient.
Only if you think a neuron makes up a core. But the neuron is really the transistor of the brain.
Posted on Reply
#23
moproblems99
xenocide said:
We've been working on Carbon Nanotubes for under a decade, and it's probably going to take ~10-15 years before we can make really complex devices using them without also spending an absolute fortune.
Funny, your prediction correlates with the article.
Posted on Reply
#24
Caring1
bug said:
Only if you think a neuron makes up a core. But the neuron is really the transistor of the brain.
More like the gates or on / off switches.
Posted on Reply
#25
TheGuruStud
Caring1 said:
More like the gates or on / off switches.
I can't tell if this is a joke or not.
Posted on Reply