I recently got the chance to see a talk at the Centre for Theoretical Neuroscience (CTN) by renowned computer engineer Dr. Steve Furber. Dr. Furber is probably best known for his involvement in developing the ARM microprocessor, the chip that powers many of the tablets and smartphones we use every day. He has since moved to the University of Manchester, where he has been building biologically inspired computer hardware.
The focus of his talk was his lab’s SpiNNaker project (short for “Spiking Neural Network Architecture”). SpiNNaker is a computer architecture designed to simulate the human brain. It’s part of an emerging class of neuromorphic technologies that have been garnering a fair amount of recent press (examples include IBM’s TrueNorth, Stanford University’s Neurogrid, and the University of Heidelberg’s HiCANN).
The main goal of the SpiNNaker project is to build computer hardware that simulates large-scale neural networks in biological real time. Its development is based on three main principles:
- The topology of the neural systems it simulates should be represented virtually, in software, rather than fixed in the hardware.
- It should abide by the principles of bounded asynchrony, which best describe cell-to-cell interactions (see this article for more information).
- The architecture should be as energy efficient as possible.
With these goals in mind, Dr. Furber’s lab has spent the last few years developing the SpiNNaker chip. The chip is composed of 20 low-power ARM968 cores that simulate spiking neurons. A ‘spike’ is generated when a core sends a packet of information to an on-chip router. This router then sends the packet to other cores, either on the same chip or on different chips. Each SpiNNaker chip simulates approximately 1000 neurons, and chips can be connected together to form large-scale neural networks (the largest of which contains 2^16 chips, simulating over 1 million neurons). Importantly, these chips work simultaneously, providing a massively parallel architecture to simulate neural systems (click here for a more detailed description of the SpiNNaker architecture).
Dr. Furber emphasized that one of the most important innovations of this architecture is the routing system SpiNNaker uses to connect its cores. To reduce the information transmitted by each core, the packets contain information only about their source (i.e., which core has fired), without specifying a destination. To find a packet’s destination, each router attempts to match the packet’s source against a lookup table of potential destinations. These lookup tables are not comprehensive (i.e., they do not contain all of the potential connections); instead, each router holds a distinct subset of possible destinations that is preloaded by the modeler. If no destination is found for a packet’s source, a default route is used: the packet is forwarded to the neighboring chip opposite the link it arrived on. These steps are repeated until a destination is found for the packet.
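As I understood it, this lookup-then-default-route scheme can be sketched roughly as follows. This is a deliberately simplified, hypothetical model, a 1-D chain of chips with Python dictionaries standing in for the lookup tables, not the actual SpiNNaker hardware (which uses a 2-D mesh and ternary content-addressable memories); the function and variable names are my own.

```python
def route(source_id, start_chip, direction, tables, num_chips, max_hops=32):
    """Follow a packet that carries only its source's id.

    tables[chip] maps a source id to 'local' (deliver to a core on
    this chip) or to a travel direction (+1 / -1 along the chain).
    If a chip's table has no entry for this source, the default route
    applies: the packet carries straight on, leaving by the link
    opposite the one it arrived from.
    """
    chip = start_chip
    for _ in range(max_hops):
        action = tables[chip].get(source_id)
        if action == 'local':
            return chip                        # destination found
        if action is not None:
            direction = action                 # table entry overrides direction
        chip = (chip + direction) % num_chips  # hop to the next chip
    raise RuntimeError("packet dropped: no destination within max_hops")
```

Note that only the first and last chips on the path need table entries; the chips in between deliver the packet by default routing alone, which is what keeps the per-chip memory requirements small.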
This method of communication dramatically reduces both the energy required to communicate between cores and the memory requirements of each chip. Additionally, since the lookup tables are preset to match the neural topology they are meant to simulate, they can be configured to flexibly represent a number of different neural architectures virtually, without requiring any direct changes to the hardware.
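To make the “preloaded tables” idea concrete, here is one way a modeler’s tool might compile a desired connectivity pattern into per-chip tables. This is again an illustrative sketch under the same simplifying assumptions (a 1-D chain of chips, a single entry per source), not SpiNNaker’s actual multicast table format; `build_tables` and its inputs are hypothetical names.

```python
def build_tables(neuron_chip, connections, num_chips):
    """Compile a connection list into per-chip routing tables.

    neuron_chip : maps a source neuron id to the chip simulating it.
    connections : (source neuron, target neuron) pairs to realise.

    Only the source's own chip and the target's chip receive entries;
    every chip in between relies on the default route, which carries
    the packet straight on.  (The real machine supports multicast;
    here each source is assumed to project in only one direction.)
    """
    tables = [dict() for _ in range(num_chips)]
    for src, dst in connections:
        a, b = neuron_chip[src], neuron_chip[dst]
        if a != b:
            tables[a][src] = 1 if b > a else -1  # aim toward the target's chip
        tables[b][src] = 'local'                 # deliver to a core on arrival
    return tables
```

Changing the simulated topology then amounts to regenerating and reloading these tables, with no change to the hardware itself.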
Prior to this talk, I had heard of, but never really looked into, neuromorphic technology. Although the broad uses for this type of technology aren’t immediately apparent, the implications for neuroscience research are definitely exciting. These chips offer a low-powered, massively parallel architecture that can be flexibly configured to simulate a number of different neural architectures. With big firms like IBM investing considerable resources in the development of these types of chips, I’m excited at the prospect of how this technology will help inform our understanding of the human brain.
That said, one can’t help but think that someday neuromorphic technologies could lead to something more ominous…