Tuesday, September 26th 2017

Intel Introduces Neuromorphic Self-Learning Chip Codenamed "Loihi"

Intel has been steadily increasing its portfolio of products in the AI space through the acquisition of multiple AI-focused companies such as Nervana, Mobileye, and others. With its growing portfolio of AI-related IP, the company is looking to carve itself a slice of the AI computing market, and this sometimes means thinking inside the box more than outside of it. It doesn't matter how many cores and threads you can put in your HEDT system: the human brain's wetware is still one of the most impressive computation machines known to man.

That idea is what's behind neuromorphic computing, where chips are designed to mimic the overall architecture of the human brain, neurons, synapses and all. It marries the fields of biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, mimicking the morphology of individual neurons, circuits, applications, and overall architectures. This, in turn, affects how information is represented, improves robustness to damage by distributing the workload across a "many cores" design, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.
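
To make the "silicon neuron" idea a bit more concrete, here is a minimal, purely illustrative sketch of a leaky integrate-and-fire unit, the kind of simplified spiking model neuromorphic cores typically implement. The parameter values are arbitrary examples, not anything Intel has published.

```python
# Illustrative only: a leaky integrate-and-fire "silicon neuron".
# Threshold, leak and reset values are arbitrary examples, not Loihi specifics.

def simulate_lif(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_currents):
        potential = potential * leak + current   # integrate input, leak some charge
        if potential >= threshold:               # fire once the threshold is crossed
            spikes.append(t)
            potential = reset                    # reset after the spike
    return spikes

# A steady drip of input current produces a periodic spike train.
print(simulate_lif([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```

On an actual neuromorphic chip, enormous numbers of such units run in parallel and talk to each other only through spikes, which is a large part of where the efficiency comes from.
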
Intel's Loihi is hardly the first such neuromorphic computing chip to enter the market. The concept, coined by Carver Mead in the 1980s, was first taken up by universities (as early as 2006 at Georgia Tech), and has since been picked up by companies such as IBM. IBM's own TrueNorth neuromorphic CMOS integrated circuit (described as a many-core network processor on a chip) features a grand total of 4,096 cores, and it powers all of those with just 70 milliwatts, or about 1/10,000th the power density of conventional microprocessors. Each of these cores simulates 256 artificial, programmable silicon "neurons" for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them, which brings the total number of programmable synapses to just over 268 million. That's still a far cry from the 84 billion neurons of yours truly, mankind, though.
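
Those TrueNorth totals follow directly from the per-core figures; a quick back-of-the-envelope check:

```python
# Sanity-checking the TrueNorth numbers quoted above.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # 1,048,576 -> "just over a million"
synapses = neurons * synapses_per_neuron  # 268,435,456 -> "just over 268 million"

print(f"{neurons:,} neurons, {synapses:,} synapses")
```
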
Loihi, however, will feature "only" 130,000 neurons and 130 million synapses. There's fully asynchronous processing capability built into this chip, through a many-core mesh that supports a wide range of sparse, hierarchical and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons. Each of these neuromorphic cores includes a "learning engine", which allows the core's learning parameters to change on the fly according to the particular needs of a given workload, through supervised, unsupervised, reinforcement and other learning paradigms.
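
Intel hasn't published the exact rules the learning engine uses, but the general idea is that each synapse adjusts its own weight from purely local spike activity. The sketch below is a generic, STDP-flavoured stand-in for such a local rule, offered only to illustrate the concept; it is not Loihi's actual learning algorithm, and every parameter in it is an assumed example.

```python
# Generic local plasticity rule (STDP-style), for illustration only.
# pre_trace/post_trace are decaying memories of recent pre-/post-synaptic spikes.

def update_weight(weight, pre_spiked, post_spiked, pre_trace, post_trace,
                  lr_plus=0.01, lr_minus=0.012, w_min=0.0, w_max=1.0):
    """One weight update using nothing but local spike information."""
    if post_spiked:
        weight += lr_plus * pre_trace    # pre fired shortly before post: strengthen
    if pre_spiked:
        weight -= lr_minus * post_trace  # post fired shortly before pre: weaken
    return min(max(weight, w_min), w_max)

# Example: the post-synaptic neuron just fired while the pre-synaptic trace
# is still high, so the connection gets slightly stronger.
print(update_weight(0.5, pre_spiked=False, post_spiked=True,
                    pre_trace=0.8, post_trace=0.1))   # -> 0.508
```

The appeal of rules like this is that nothing needs to leave the core: weights can keep adapting while the chip runs, which is what "changing on the fly" amounts to here.
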

Intel says Loihi is especially suited to the development and testing of highly efficient algorithms for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation. This may all sound like science fiction, but remember these chips will still be fabricated on Intel's 14 nm process technology, and don't incorporate any exotic materials. It's still your old silicon doing its wonders.
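
For a flavour of one workload on that list, sparse coding, here's a toy, conventional-CPU sketch that approximates a signal with a handful of atoms from a dictionary via greedy matching pursuit. It only illustrates the problem being solved; it says nothing about how Loihi itself would attack it.

```python
# Toy sparse coding via greedy matching pursuit (illustrative only).
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedily pick n_atoms unit-norm dictionary columns that explain the signal."""
    residual = signal.astype(float).copy()
    coefficients = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual        # how well each atom matches
        best = int(np.argmax(np.abs(correlations)))   # take the best-matching atom
        coefficients[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return coefficients, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)            # normalize atoms to unit length
x = 2.0 * D[:, 5] - 1.5 * D[:, 20]        # signal secretly built from two atoms
coeffs, residual = matching_pursuit(x, D)
print(np.nonzero(coeffs)[0], np.linalg.norm(residual))
```
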

Sources: Intel Newsroom, The Guardian, IBM's TrueNorth Wikipedia, Neuromorphic Engineering Wikipedia, The Future of AI, DARPA SyNAPSE with TrueNorth (Second Picture)

17 Comments on Intel Introduces Neuromorphic Self-Learning Chip Codenamed "Loihi"

#1
R00kie
Should've called him Steve, would've been less ominous sounding.
#2
Ferrum Master
Mmm... bad naming...

It sounds like lohi... and it means stupid in certain languages.
#3
R00kie
Ferrum Master: Mmm... bad naming...

It sounds like lohi... and it means stupid in certain languages.
Exactly :roll:
#4
silentbogo
So, that's what they bought Altera for... One really big and very expensive self-programmable CPLD the size of a cockroach brain.
It may look promising, but I would still bet my money on lab-grown rat neurons.
#5
R00kie
silentbogo: So, that's what they bought Altera for... One really big and very expensive self-programmable CPLD the size of a cockroach brain.
It may look promising, but I would still bet my money on lab-grown rat neurons.
Or Bionic rats that are smarter than humans.
#6
silentbogo
gdallsk: Or Bionic rats that are smarter than humans.
No, that'll give me nightmares :fear:
#7
TheoneandonlyMrK
silentbogo: So, that's what they bought Altera for... One really big and very expensive self-programmable CPLD the size of a cockroach brain.
It may look promising, but I would still bet my money on lab-grown rat neurons.
I am with you here; the advances in bio-electronic interfacing and the relative ease of reproduction make this a very powerful potential. 2027, I'm thinking ED-209 myself with Roland Rat's brain, yeaaahhhhgg.

But on reading this I couldn't help but see the similarity (not in tech but in its creation) to FPUs and, dare I say it, GPUs. Straight away my thoughts went to: nice.

Now when can I get a PCIe x4 version to boost the AI in my games? I want another Unreal moment in gaming.

That moment of: shit, I like this, this is ace.

VR is that a bit, but it's had so many false starts and false dawns it's a bit jaded (Project CARS 2 in Oculus though, wow, I heart it).
#8
RejZoR
I wonder just 2 things. Can this make Crysis run faster and can this be used to make bots less dumb?

Also, calling it "Loki" would be better. It can help us and it'll eventually betray us...
#9
Vya Domus
They did something similar way back in the 90s if I remember correctly. They made a chip for neural networks that actually relied on analog circuitry.
#10
metalfiber
Plug into me I guarantee devotion
Plug into me and dedicate
Plug into me and I’ll save you from emotion
Plug into me and terminate
Accelerate, Utopian solution
Finally cure the Earth of man
Exterminate, speeding up the evolution
Set on course a master plan
Reinvent the earth inhabitant

Long live machine
The future supreme
Man overthrown
Spit out the bone.
#11
Steevo
gdallsk: Should've called him Steve, would've been less ominous sounding.
I, I sound ominous....... I really do, I swear it!!!

The issue with AI is when they put AI into chips and have them learn something, there is no exponential learning from the base like there is in humans. Sure, you can teach AI to recognize faces, but that is all it is good at, and it still has a chance of being wrong, just like humans; it's just the margin of error that you are OK with. When they teach AI to design a better, faster version of itself, and it manages to do that again and again until there is a great working one, we probably won't be able to understand it or how it works, thus rendering the level of confidence almost nil, as there will be no way to check its work. One day I am sure we will achieve AI that works as it should, and it will probably be the day it tries to kill us all for being shitty humans. <- They should call the one that kills all humans Steve.
#12
TheoneandonlyMrK
Steevo: I, I sound ominous....... I really do, I swear it!!!

The issue with AI is when they put AI into chips and have them learn something, there is no exponential learning from the base like there is in humans. Sure, you can teach AI to recognize faces, but that is all it is good at, and it still has a chance of being wrong, just like humans; it's just the margin of error that you are OK with. When they teach AI to design a better, faster version of itself, and it manages to do that again and again until there is a great working one, we probably won't be able to understand it or how it works, thus rendering the level of confidence almost nil, as there will be no way to check its work. One day I am sure we will achieve AI that works as it should, and it will probably be the day it tries to kill us all for being shitty humans. <- They should call the one that kills all humans Steve.
Dunno, I actually favour Dave, as I know more dangerous Daves than psycho Steves :)
#13
lordofdusk95
Am I the only one here who still thinks AI could take over and bring chaos to the world just like in the movies? e.g. Terminator, Transcendence, I, Robot, etc.
Sometimes I LOL to myself, am I watching too many sci-fi movies or somethin? XD
#14
_JP_
theoneandonlymrk: Dunno, I actually favour Dave, as I know more dangerous Daves than psycho Steves :)
If y'all played Fallout, you would know how dangerous a Gary is. :p
#15
thesmokingman
gdallsk: Should've called him Steve, would've been less ominous sounding.
I'd prefer HAL for the real heebie-jeebies.