
IBM unveils brain-like cognitive computer

Aug 18, 2011 — by LinuxDevices Staff — from the LinuxDevices Archive

A DARPA-funded IBM Research "SyNAPSE" project has shown off prototypes of two cognitive computer processors that replicate the neural networks of the brain in silicon chips. Combining principles from supercomputing, nanotechnology, and neuroscience, the cognitive computing chips offer low power consumption and a small footprint, and excel at multi-sensory applications, including playing "Pong," checking inventory, and predicting the weather, says IBM.

On Aug. 18, IBM debuted the world's first cognitive computer chips, sometimes referred to as "cognizers." By replicating the functions of neurons and synapses in the human brain, the company has crafted the world's first chips aimed at taming the overwhelming wealth of information in multiple sensor data-streams, by learning to adapt like human brains.

The chips have already beaten humans at the game "Pong" and promise to impart human-like abilities to all sorts of future cognitive computers.


Principal investigator Dharmendra Modha in front of the brain-wall at IBM Research, where the operation of the neurons and synapses in IBM's cognitive computers is visualized.

The new cognitive computer chips are being created under a DARPA (Defense Advanced Research Projects Agency) program called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics). IBM and its university research partners just received a $21 million infusion of cash to continue the project.

Already the researchers have crafted a simulation of a complete cat brain — called Blue Matter — and more recently have mapped the entire wiring diagram of a monkey brain.

The first step was to use the latest neurological science to develop algorithms that accurately model these brain functions. The researchers then used nanotechnology to implement their supercomputer model in nanoscale semiconductors, forming the core architecture of the cognitive computer chips.

IBM has developed two working prototypes, each fabricated with 45nm SOI-CMOS technology and containing 256 neurons. One core contains 262,144 programmable synapses and the other houses 65,536 learning synapses.
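The synapse counts fall out neatly if each core is assumed to be a fully connected crossbar between input lines (axons) and the 256 neurons. A quick sanity check of that arithmetic (the crossbar interpretation is an assumption, not stated in the article):

```python
# Each prototype core has 256 neurons. If every synapse count is a
# full crossbar of (input axons x neurons), the axon counts are:
neurons = 256

# 262,144 programmable synapses -> 1024 input axons per neuron column
axons_programmable = 262_144 // neurons
print(axons_programmable)  # 1024

# 65,536 learning synapses -> 256 input axons (a square 256x256 array)
axons_learning = 65_536 // neurons
print(axons_learning)  # 256
```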

The IBM team has demonstrated simple applications, including navigation, machine vision, pattern recognition, associative memory, and classification. A cognitive computer could easily monitor thousands of sensor inputs measuring the ocean's temperature, pressure, wave height, acoustics and tides, then issue super-accurate tsunami warnings, suggests IBM.

In grocery stores, a sensor-studded stocking glove could monitor the color, smell, texture, and temperature of produce as it is being put on store shelves. Such a device could immediately flag any spoiled or contaminated items.

The ultimate goal of the project is to build an artificial brain similar in size, capability and power consumption to a human brain.

Moving beyond von Neumann

The traditional computers that we all use today are based on an antiquated design first proposed by John von Neumann in 1945. The so-called von Neumann architecture artificially separates processing from memory — putting the processor on one chip and its memory on others.

Unfortunately, this division of labor makes it incredibly difficult to combine the knowledge gleaned from multiple data streams — the No. 1 unsolved problem facing computer systems today. Cognitive computers, on the other hand, replicate the way the human brain distributes processing and memory among the same circuitry, which in the brain is composed of neurons and synapses.

"Our chip represents a sharp departure in architecture from the tradition von Neumann computers," stated Dharmendra Modha, project leader for IBM Research. "All memory functions are integrated with program functions, creating a kind of social network of neurons with all their software stored in synapses."

Neurons are tiny cells that by their very nature integrate inputs from multiple sources. In the brain, these inputs come from other neurons, of which there are billions. Each neuron solves problems by integrating the pulses received over its dendrites from other neurons until a threshold is exceeded. At that point, it fires a pulse down its output axon, then resets and starts integrating anew.
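The integrate-fire-reset cycle described above can be sketched in a few lines. This is a deliberately minimal illustration of the general integrate-and-fire idea, not IBM's silicon implementation; the threshold value and input pulses are made up:

```python
# Minimal integrate-and-fire neuron: sum incoming pulses until a
# threshold is exceeded, fire, then reset and integrate anew.

def run_neuron(input_pulses, threshold=3.0):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    spikes = []
    for t, pulse in enumerate(input_pulses):
        potential += pulse           # integrate inputs from dendrites
        if potential >= threshold:   # threshold exceeded
            spikes.append(t)         # fire a pulse down the axon
            potential = 0.0          # reset and start integrating anew
    return spikes

print(run_neuron([1, 1, 1, 0, 2, 2, 0, 1]))  # [2, 5]
```

Note that the neuron is silent except at the moments it fires — the property that lets such designs consume power only when a pulse is actually produced.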

Firing rates are typically around 10Hz, with power only being consumed when a pulse is actually produced. This enables ultra-low-power operation for brain-like computers even though they use billions of neurons.

The other major components of brains are the trillions of synapses that weight the pulses emitted by firing neurons. Even though a neuron might be connected to thousands of nearby neighbors, each of these connections is strengthened or weakened by a synapse, which holds the memories of the brain.

Often-used connections between two neurons grow a synaptic connection that is large and fast, enabling it to contribute quickly to pattern recognition tasks, such as recognizing your friends' faces. Seldom-used connections, on the other hand, are small and weak, requiring extra time to recognize the patterns under their control.
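The strengthening of often-used connections can be sketched with a simple Hebbian-style weight update. This is an illustrative toy, not IBM's learning rule; the weight values and learning rate are arbitrary:

```python
# Toy synaptic weighting: each connection carries a weight, and
# connections that are used get strengthened, so their pulses
# contribute more to the receiving neuron.

def weighted_input(pulses, weights):
    """Sum of incoming pulses, each scaled by its synaptic weight."""
    return sum(p * w for p, w in zip(pulses, weights))

def strengthen(weights, used, rate=0.1):
    """Increase the weight of each connection that was just used."""
    return [w + rate if u else w for w, u in zip(weights, used)]

weights = [0.5, 0.5, 0.5]
print(weighted_input([1, 0, 1], weights))      # 1.0
weights = strengthen(weights, [True, False, True])
print(weighted_input([1, 0, 1], weights))      # 1.2 (stronger response)
```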

IBM claims this type of architecture has wide and deep applications, and can easily make sense of the multiple simultaneous sensor data streams that are increasingly common today.

R. Colin Johnson is a regular contributor to Smarter Technology, and the author of "Cognizers — Neural Networks and Machines that Think" (John Wiley & Sons, October 1988).


This article was originally published on LinuxDevices.com and has been donated to the open source community by QuinStreet Inc. Please visit LinuxToday.com for up-to-date news and articles about Linux and open source.


