
Intel Introduces Neuromorphic Self-Learning Chip Codenamed "Loihi"

Raevenlord

News Editor
Intel has been steadily increasing its portfolio of products in the AI space through the acquisition of multiple AI-focused companies such as Nervana, Mobileye, and others. With its expanded portfolio of AI-related IP, the company is looking to carve itself a slice of the AI computing market, and this sometimes means thinking inside the box more than outside of it. No matter how many cores and threads you can cram into your HEDT system, the human brain's wetware remains one of the most impressive computation machines known to man.

That idea is what's behind neuromorphic computing, where chips are designed to mimic the overall architecture of the human brain, neurons, synapses and all. It marries the fields of biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, mimicking the morphology of individual neurons, circuits, applications, and overall architectures. This, in turn, affects how information is represented, influences robustness to damage due to the distribution of workload through a "many cores" design, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.

Intel's Loihi is hardly the first such neuromorphic computing chip to enter the market. The concept, coined by Carver Mead in the 1980s, was first taken up by universities (as early as 2006 at Georgia Tech), and has since been picked up by companies such as IBM. IBM's own TrueNorth neuromorphic CMOS integrated circuit (described as a many-core network processor on a chip) features a grand total of 4,096 cores. And it powers all of those with just 70 milliwatts, or about 1/10,000th the power density of conventional microprocessors. Each of these cores simulates 256 artificial, programmable silicon "neurons" for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them, which brings the total number of programmable synapses to just over 268 million. That's still a far cry from mankind's average of 84 billion neurons, though.
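The TrueNorth totals quoted above multiply out exactly; a quick sanity check in Python (the figures are from the article, the variable names are mine):

```python
# Sanity-checking the TrueNorth numbers quoted above.
cores = 4096                 # cores on the TrueNorth chip
neurons_per_core = 256       # programmable silicon "neurons" per core
synapses_per_neuron = 256    # programmable "synapses" per neuron

total_neurons = cores * neurons_per_core
total_synapses = total_neurons * synapses_per_neuron

print(f"{total_neurons:,} neurons")    # 1,048,576 -> "just over a million"
print(f"{total_synapses:,} synapses")  # 268,435,456 -> "just over 268 million"
```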

Loihi, however, will feature "only" 130,000 neurons and 130 million synapses. There's fully asynchronous processing capability built into this chip through its many-core mesh, which supports a wide range of sparse, hierarchical, and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons. Each of these neuromorphic cores includes a "learning engine", which allows the core's learning parameters to change on the fly according to the particular needs of a given workload, through supervised, unsupervised, reinforcement, and other learning paradigms.
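To make the "neurons, synapses, and on-the-fly learning" idea concrete, here is a minimal sketch of a generic leaky integrate-and-fire spiking neuron with a toy Hebbian-style weight update. This is a textbook illustration of the spiking-neuron concept, not Intel's actual Loihi core design; all parameters (leak factor, threshold, learning rate) are made up for the example:

```python
import numpy as np

# Illustrative leaky integrate-and-fire neuron with a simple spike-driven
# weight update. Generic textbook model, NOT Intel's Loihi design.
rng = np.random.default_rng(0)

n_inputs = 8
weights = rng.uniform(0.1, 0.5, n_inputs)  # hypothetical synaptic weights
v = 0.0          # membrane potential
tau = 0.9        # leak factor per timestep (made up)
threshold = 1.0  # spike threshold (made up)
lr = 0.01        # learning rate for the toy plasticity rule (made up)

spike_count = 0
for t in range(100):
    spikes_in = rng.random(n_inputs) < 0.3  # random presynaptic spikes
    v = tau * v + weights @ spikes_in       # leaky integration of inputs
    if v >= threshold:
        v = 0.0                             # reset after firing
        spike_count += 1
        # Hebbian-style update: strengthen synapses that just contributed
        weights += lr * spikes_in

print("output spikes:", spike_count)
```

The "learning engine" described above would, in spirit, adjust rules and parameters like `lr` and the update step per workload; here they are fixed for simplicity.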

Intel says Loihi is especially well suited to developing and testing highly efficient algorithms for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation. This may all sound like science fiction, but remember that these chips will still be fabricated on Intel's 14 nm process technology and don't incorporate any exotic materials. It's still your old silicon doing its wonders.


View at TechPowerUp Main Site
 
Should've called him Steve, would've been less ominous sounding.
 
Mmm... bad naming...

It sounds like lohi... and it means stupid in certain languages.
 
So, that's what they bought Altera for... One really big and very expensive self-programmable CPLD the size of a cockroach brain.
It may look promising, but I would still bet my money on lab-grown rat neurons.
 
So, that's what they bought Altera for... One really big and very expensive self-programmable CPLD the size of a cockroach brain.
It may look promising, but I would still bet my money on lab-grown rat neurons.
Or Bionic rats that are smarter than humans.
 
So, that's what they bought Altera for... One really big and very expensive self-programmable CPLD the size of a cockroach brain.
It may look promising, but I would still bet my money on lab-grown rat neurons.
I am with you here; the advances in bio-electronic interfacing and the relative ease of reproduction make this very powerful potential. 2027, I'm thinking ED-209 myself, with Roland Rat's brain, yeaaahhhhgg.


But on reading this I couldn't help but see the similarity (not in tech but in its creation) to FPUs and, dare I say it, GPUs; straight away my thoughts went "nice".

Now, when can I get a PCIe x4 version to boost the AI in my games? I want another Unreal moment in gaming.

That moment of: shit, I like this, this is ace.

VR is a bit like that, but it's had so many false starts and false dawns it's a bit jaded (Project CARS 2 in Oculus though, wow, I heart it).
 
I wonder just 2 things. Can this make Crysis run faster and can this be used to make bots less dumb?

Also, calling it "Loki" would be better. It can help us and it'll eventually betray us...
 
They did something similar way back in the 90s if I remember correctly. They made a chip for neural networks that actually relied on analog circuitry.
 
Plug into me I guarantee devotion
Plug into me and dedicate
Plug into me and I’ll save you from emotion
Plug into me and terminate
Accelerate, Utopian solution
Finally cure the Earth of man
Exterminate, speeding up the evolution
Set on course a master plan
Reinvent the earth inhabitant

Long live machine
The future supreme
Man overthrown
Spit out the bone.
 
Should've called him Steve, would've been less ominous sounding.


I, I sound ominous....... I really do, I swear it!!!

The issue with AI is that when they put AI into chips and have them learn something, there is no exponential learning from the base like there is in humans. Sure, you can teach AI to recognize faces, but that is all it is good at, and it still has a chance of being wrong, just like humans; it's just the margin of error that you are OK with. When they teach AI to design a better, faster version of itself, and it manages to do that again and again until there is a great working one, we probably won't be able to understand it or how it works, thus rendering the level of confidence almost nil, as there will be no way to check its work. One day I am sure we will achieve AI that works as it should, and it will probably be the day it tries to kill us all for being shitty humans. <- They should call the one that kills all humans Steve.
 
I, I sound ominous....... I really do, I swear it!!!

The issue with AI is that when they put AI into chips and have them learn something, there is no exponential learning from the base like there is in humans. Sure, you can teach AI to recognize faces, but that is all it is good at, and it still has a chance of being wrong, just like humans; it's just the margin of error that you are OK with. When they teach AI to design a better, faster version of itself, and it manages to do that again and again until there is a great working one, we probably won't be able to understand it or how it works, thus rendering the level of confidence almost nil, as there will be no way to check its work. One day I am sure we will achieve AI that works as it should, and it will probably be the day it tries to kill us all for being shitty humans. <- They should call the one that kills all humans Steve.
Dunno, I actually favour Dave, as I know more dangerous Daves than psycho Steves :)
 
Am I the only one here who still thinks AI could take over and bring chaos to the world just like in the movies? E.g. Terminator, Transcendence, I, Robot, etc.
Sometimes I LOL to myself: am I watching too many sci-fi movies or something? XD
 
Skynet.jpg
 

Ain't it Cyberdyne?
 