
IBM Unveils a ‘Brain-Like’ Chip With 4,000 Processor Cores

The human brain is the world’s most sophisticated computer, capable of learning new things on the fly, using very little data. It can recognize objects, understand speech, and respond to change. Since the early days of digital technology, scientists have worked to build computers that were more like the three-pound organ inside your head.
Most efforts to mimic the brain have focused on
software, but in recent years, some researchers
have ramped up efforts to create neuro-inspired
computer chips that process information in
fundamentally different ways from traditional
hardware. This includes an ambitious project
inside tech giant IBM, and today, Big Blue
released a research paper describing the latest
fruits of these labors. With this paper, published
in the academic journal Science, the company
unveils what it calls TrueNorth, a custom-made
“brain-like” chip that builds on a simpler
experimental system the company released in 2011.
TrueNorth comes packed with 4,096 processor
cores, and it mimics one million human neurons
and 256 million synapses, two of the fundamental
biological building blocks that make up the
human brain. IBM calls these “spiking neurons.”
What that means, essentially, is that the chip can
encode data as patterns of pulses, which is
similar to one of the many ways neuroscientists
think the brain stores information.
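Pulse-based coding of this kind can be illustrated with a toy leaky integrate-and-fire neuron, a common simplification in neuromorphic work (this sketch is illustrative only; TrueNorth’s actual neuron model is considerably more elaborate):

```python
# Toy leaky integrate-and-fire (LIF) neuron: information is carried by
# *when* the neuron spikes, not by a continuous output value.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # decay, then integrate input
        if potential >= threshold:              # fire once threshold is crossed
            spikes.append(t)
            potential = 0.0                     # reset after each spike
    return spikes

# A stronger input is encoded as earlier, denser spikes.
weak = simulate_lif([0.3] * 10)    # spikes at steps 3 and 7
strong = simulate_lif([0.6] * 10)  # spikes at steps 1, 3, 5, 7, 9
```

The same neuron thus represents a weak input and a strong input purely by the timing and density of its pulses, which is the timing-based representation the article describes.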
“This is a really neat experiment in architecture,”
says Carver Mead, a professor emeritus of
engineering and applied science at the California
Institute of Technology who is often considered
the granddaddy of “neuromorphic” hardware. “It’s
a fine first step.” Traditional processors—like the
CPUs at the heart of our computers and the GPUs
that drive graphics and other math-heavy tasks—
aren’t good at encoding data in this brain-like
way, he explains, and that’s why IBM’s chip
could be useful. “Representing information with
the timing of nerve pulses…that’s just not been a
thing that digital computers have had a way of
dealing with in the past,” Mead says.
IBM has already tested the chip’s ability to drive
common artificial intelligence tasks, including
recognizing images, and according to the
company, its neurons and synapses can handle
such tasks with unusual speed, using much less
power than traditional off-the-shelf chips. When
researchers challenged the thing with DARPA’s
NeoVision2 Tower dataset—which includes images
taken from video recorded atop Stanford
University’s Hoover Tower—TrueNorth was able
to recognize things like people, cyclists, cars,
buses, and trucks with about 80 percent
accuracy. What’s more, when the researchers
then fed TrueNorth streaming video at 30 frames
per second, it only burned 63 mW of power as it
processed the data in real time.
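Taken at face value, those two figures imply a tiny energy budget per frame; a quick back-of-the-envelope check using only the numbers above:

```python
# Back-of-the-envelope: energy per frame from the article's own figures.
power_w = 0.063         # 63 mW reported while processing streaming video
frames_per_second = 30  # real-time frame rate used in the test

energy_per_frame_j = power_w / frames_per_second
print(f"{energy_per_frame_j * 1000:.1f} mJ per frame")  # prints "2.1 mJ per frame"
```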
“There’s no CPU. There’s no GPU, no hybrid
computer that can come within even a couple of
orders of magnitude of where we are,” says
Dharmendra Modha, the man who oversees the
project. “The chip is designed for real-time power
efficiency.” Nobody else, he claims, “can deliver
this in real time at the vast scales we’re talking
about.” The trick, he explains, is that you can tile
the chips together easily to create a massive
neural network. IBM created a 16-chip board just
a few weeks ago that can process video in real time.
Both these chips and this board are just research
prototypes, but IBM is already hawking the
technology as something that will revolutionize
everything from cloud services and supercomputers
to smartphone technology. It’s “a new machine
for a new era,” says Modha. “We really think this
is a new landmark in the history of brain-inspired
computing.” But others question whether this
technology is all that different from current
systems and what it can actually do.
Beyond von Neumann
IBM’s chip research is part of the SyNAPSE
project, short for Systems of Neuromorphic
Adaptive Plastic Scalable Electronics, a massive
effort from DARPA, the Defense Department’s
research arm, to create brain-like hardware.
The ultimate aim of the project—which has
invested about $53 million since 2008 in IBM’s
project alone—is to create hardware that breaks
the von Neumann paradigm, the standard way of
building computers.
In a von Neumann computer, the storage and
handling of data is divvied up between the
machine’s main memory and its central
processing unit. To do their work, computers
carry out a set of instructions, or programs,
sequentially by shuttling data from memory
(where it’s stored) to the CPU (where it’s
crunched). Because the memory and CPU are
separated, data needs to be shuttled back and forth constantly.
This creates a bottleneck and requires lots of
energy. There are ways around this, like using
multi-core chips that can run tasks in parallel or
storing things in cache—a special kind of memory
that sits closer to the processor—but this buys
you only so much speed-up and not much in the way of
power savings. It also means that computers are never
really working in real-time, says Mead, because of
the communication roadblock.
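As a deliberately simplified caricature, the fetch/compute/store cycle described above can be sketched like this, counting each trip across the memory/CPU boundary (all names here are invented for illustration):

```python
# Caricature of the von Neumann cycle: a single shared memory, a single
# CPU register, and one trip across the memory bus per instruction.
memory = {"a": 2, "b": 3, "result": None}
program = [
    ("load", "a"),        # memory -> register
    ("add", "b"),         # another trip to memory for the operand
    ("store", "result"),  # register -> memory
]

register = 0
bus_transfers = 0
for op, addr in program:
    bus_transfers += 1  # every instruction crosses the bottleneck
    if op == "load":
        register = memory[addr]
    elif op == "add":
        register += memory[addr]
    elif op == "store":
        memory[addr] = register

# Adding two numbers cost three trips across the memory bus.
```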
We don’t completely understand how the brain
works. But in his seminal work, The Computer
and the Brain, John von Neumann himself
said that the brain is something fundamentally
different from the computing architecture that
bears his name, and ever since, scientists have
been trying to understand how the brain encodes
and processes information with the hope that they
can translate that into smarter computers.
Neuromorphic chips developed by IBM and a
handful of others don’t separate the data-storage
and data-crunching parts of the computer.
Instead, they pack the memory, computation and
communication parts into little modules that
process information locally but can communicate
with each other easily and quickly. This, IBM
researchers say, resembles the circuits found in
the brain, where the separation of computation
and storage isn’t as cut and dried, and it’s what
buys the thing added energy efficiency—arguably
the chip’s best selling point to date.
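A rough sketch of that layout, with memory and compute bundled per core and cores exchanging only spike messages (core counts, weights, and the message format here are made up for illustration):

```python
# Sketch of a neuromorphic tile: each core keeps its synaptic weights and
# neuron state locally and talks to other cores only via spike messages,
# so data never travels to a distant central memory.
class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.weights = {}     # local synaptic weights (memory lives on-core)
        self.potential = 0.0  # local neuron state
        self.outbox = []      # spike messages bound for neighboring cores

    def receive_spike(self, source):
        # Compute happens right next to the data: no trip to main memory.
        self.potential += self.weights.get(source, 0.25)
        if self.potential >= 1.0:
            self.outbox.append(("spike", self.core_id))
            self.potential = 0.0

tile = [Core(i) for i in range(4)]  # a tiny 4-core "tile"
for _ in range(4):                  # four incoming spikes at weight 0.25
    tile[0].receive_spike("sensor")
# tile[0] has now fired exactly once
```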
But Can It Learn?
But some question how novel the chip really is.
“The good point about the architecture is that
memory and computation are close. But again, if
this does not scale to state-of-the-art problems, it
will not be different from current systems where
memory and computation are physically
separated,” says Eugenio Culurciello, a professor
at Purdue University, who works on neuromorphic
systems for vision and helped develop the
NeuFlow platform in neural-net pioneer Yann
LeCun’s lab at NYU.

So far, it’s unclear how well TrueNorth performs
when it’s put to the test on large-scale, state-of-
the-art problems like recognizing many
different types of objects. It seems to have
performed well on simple image detection and
recognition tasks using DARPA’s NeoVision2
Tower dataset. But as some critics point out,
that’s only five categories of objects. The object
recognition software used at Baidu and Google,
for example, is trained on the ImageNet database,
which boasts thousands of object categories.
Modha says they started with NeoVision because
it was a DARPA-mandated metric, but they are
working on other datasets including ImageNet.
Others say that in order to break with current
computing paradigms, neurochips should learn.
“It’s definitely an achievement to make a chip of
that scale…but I think the claims are a bit
stretched because there is no learning happening
on chip,” says Narayan Srinivasa, a researcher at
HRL Laboratories who’s working on similar
technologies (also funded by SyNAPSE). “It’s not
brain-like in a lot of ways.” While the
inference does happen on TrueNorth, all the
learning happens off-line, on traditional
computers. “The von Neumann component is
doing all the ‘brain’ work, so in that sense it’s
not breaking any paradigm.”
To be fair, most learning systems today rely
heavily on off-line learning, whether they run on
CPUs or faster, more power-hungry GPUs. That’s
because learning often requires reworking the
algorithms and that’s much harder to do on
hardware because it’s not as flexible. Still, IBM
says on-chip learning is not something they’re
ruling out.
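The off-line workflow the critics describe can be sketched as a two-stage process: fit parameters on a conventional machine, then freeze them for on-chip inference (function names and the toy “training” rule here are hypothetical, not IBM’s toolchain):

```python
# Two-stage workflow: learning happens off-line on a conventional machine;
# the chip only ever runs fixed-weight inference.
def train_offline(samples):
    """Toy off-chip 'training': average the positively labeled inputs."""
    positives = [x for x, label in samples if label]
    return sum(positives) / max(1, len(positives))

def deploy_and_infer(frozen_weight, x, threshold=0.5):
    """What the chip does after deployment: inference with a frozen weight."""
    return x * frozen_weight >= threshold

w = train_offline([(1.0, True), (0.8, True), (0.1, False)])  # learned off-line
result = deploy_and_infer(w, 0.9)  # the on-chip step never updates w
```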
Critics say the technology still has many
tests to pass before it can supercharge data
centers or power new breeds of intelligent phones,
cameras, robots or Google Glass-like
contraptions. To think that we’re going to have
brain-like computer chips in our hands soon
would be “misleading,” says LeCun, whose lab
has worked on neural-net hardware for years.
“I’m all in favor of building special-purpose chips
for running neural nets. But I think people should
build chips to implement algorithms that we know
work at state of the art level,” he says. “This
avenue of research is not going to pan out for
quite a while, if ever. They may get neural net
accelerator chips in their smartphones soonish,
but these chips won’t look at all like the IBM
chip. They will look more like modified GPUs.”



I suspect what they did is use non-volatile memory in place of volatile caches in the processor. This eliminates the need for the core to leave itself except when it needs to obtain information from another core.

As to making a computer learn, I don't think there is a hardware means to achieve that now or in the near future. Neurons are living cells that are capable of changing and forming new connections. There's nothing in electronic hardware like that. I think making computers learn is still going to have to stem from software that edits its own code.
Because I trust Wired for my technological news...

I also trust the Guardian and Onion to be 100% accurate and have absolutely no joking material.

In all seriousness, this is a repost of old news. Way back in 2009, this kind of thing was all the rage: http://discovermagazine.com/2009/oct/06-brain-like-chip-may-solve-computers-big-problem-energy. What we have here is basically just a big company introducing the same technology. It's like seeing a Core 2 Duo and being surprised when, years later, they introduce a Sandy Bridge-based processor.

What IBM is failing to state is whether they've overcome the "misfiring" issues, and whether or not their evolutionary processor can actually do any useful work. Wake me when they've got a good answer to both of these questions, because a 99% leap in efficiency means nothing if you can't do anything useful with it.


Allow me to retract the anger about doing anything useful. This chip can identify approximately shaped objects better than our current binary computers can.

Of course, a one-in-five inaccuracy rate isn't exactly burning up the world given the insane costs of a niche new processor architecture. I love that they draw a comparison with Deep Blue, but it's an apples-and-oranges situation. Deep Blue is an encyclopedia with reasonably good algorithms for finding data based on vocal cues. This is a processor with a new architecture designed to access information differently.
I suspect what they did is use non-volatile memory in place of volatile caches in the processor. This eliminates the need for the core to leave itself except when it needs to obtain information from another core.

As to making a computer learn, I don't think there is a hardware means to achieve that now or in the near future. Neurons are living cells that are capable of changing and forming new connections. There's nothing in electronic hardware like that. I think making computers learn is still going to have to stem from software that edits its own code.
Yea we're pretty far away from this:

Yea we're pretty far away from this:

No we are not. My daughter already has those contacts, and I sported hair gel like that in the '90s. Get with the times, son.