Analogue and digital circuitry.

Neuromorphic computing is an approach that relies on neuromorphic chips, which are inspired by the human brain and use analogue computing units to mimic the plasticity of the biological nervous system. Unlike digital chips, which can only differentiate between ‘on’ and ‘off’ signals, analogue chips can deliver a continuous stream of information.
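The contrast between the two signal types can be sketched in a few lines of Python. This is a purely illustrative model, not any real chip's interface: a digital input is collapsed to 'on' or 'off' at a threshold, while an analogue input keeps its continuous value.

```python
import math

def digital_read(voltage, threshold=0.5):
    # Digital logic only distinguishes two levels.
    return 1 if voltage >= threshold else 0

def analogue_read(voltage):
    # Analogue circuitry passes the graded value through unchanged.
    return voltage

# A slowly varying signal, sampled at eight points.
samples = [0.5 * (1 + math.sin(t)) for t in range(8)]

print([digital_read(v) for v in samples])             # [1, 1, 1, 1, 0, 0, 0, 1]
print([round(analogue_read(v), 2) for v in samples])  # graded values between 0 and 1
```

The digital reading throws away everything except which side of the threshold the signal is on; the analogue reading preserves the full waveform.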

Until today, classic computer chip designs based on the von Neumann architecture, which separates memory and processor, have dominated and can be found in everything from smartphones to personal computers. This architecture has been the standard in computer systems since the 1950s and splits data storage and processing, using a bus system that transfers data from memory to the processor and back again. Unfortunately, this shuttling takes time and a great deal of energy, and as computers are expected to take on ever more AI-based tasks, they need to deliver ever more power; with current designs, they will very quickly reach the limits of what they are able to deliver. Compared to the human brain, which consumes 20 to 25 W, a suitably powerful computer would need about a megawatt. In a nutshell, today’s processors are nowhere near as energy efficient as biological neuromorphic systems.
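The bus bottleneck described above can be made concrete with a toy model. The class and counts below are hypothetical, chosen only to show that in a von Neumann machine every operation on data costs a round trip over the bus, separate from the computation itself:

```python
# Toy model of the von Neumann bottleneck: memory and processor are
# separate, so each operation needs a load and a store over the bus.

class BusCounter:
    def __init__(self):
        self.transfers = 0

    def load(self, memory, addr):
        self.transfers += 1          # memory -> processor
        return memory[addr]

    def store(self, memory, addr, value):
        self.transfers += 1          # processor -> memory
        memory[addr] = value

bus = BusCounter()
memory = [3, 1, 4, 1, 5, 9, 2, 6]

# Double every element: even this trivial task needs two bus
# transfers per element before any arithmetic happens.
for i in range(len(memory)):
    value = bus.load(memory, i)
    bus.store(memory, i, value * 2)

print(bus.transfers)  # 16 transfers for 8 elements
```

A neuromorphic design avoids this cost by storing and processing in the same place, so there is no bus to saturate in the first place.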

Neuromorphic hardware uses.

Standard computer chips are not going to be consigned to history. On the contrary, digital processes are by far the best choice when it comes to simple computing operations where clear results are needed. Computing as we know it isn’t going to be completely replaced, as the two systems complement each other very well. The programmatic computing needed for sequential data processing and clearly definable controls is a breeze for digital chips, whereas neuromorphic computers can flex their muscles when it comes to parallelised pattern detection and predictions on complex data. They are capable of learning and adapting, have no problems handling unpredictable environments and, when it comes to facial recognition, are better able to identify areas such as the eyes, nose and mouth. Neuromorphic chips can generate enormous computing power by storing and processing information simultaneously, much like the human brain.

Analogue chips mimic synapses, nerve cells and whole sections of the brain in order to optimise ongoing computing processes. The aim of neuromorphic computing is to imitate the brain’s self-organising and self-regulating systems in circuits and hardware, creating smart neuromorphic systems that can solve problems they were not initially programmed for. This makes them suitable for implanting into damaged human tissue such as the retina, but they could also be used to optimise robots’ movements and their pattern recognition through self-learning artificial eyes.

Future supercomputers.

Researchers are currently working on advancing neuromorphic computing through a host of different approaches, and the European Union’s Human Brain Project is funding numerous projects. Two of them—SpiNNaker (Manchester) and BrainScaleS (Heidelberg)—are leveraging very different, yet promising methods in order to make a neuromorphic supercomputer a reality. The British project, SpiNNaker (Spiking Neural Network Architecture), is working on a massively parallel computing platform using conventional multi-core processors based on ARM architecture that are interconnected via a packet router to simulate the exchange of action potentials. The ARM processors used are particularly energy efficient. SpiNNaker is inspired by the fundamental structure and function of the human brain.

The system was commissioned at the end of 2018 and currently uses one million cores with 18 cores per processor, each with 128 MB of shared RAM. BrainScaleS, on the other hand, is based on analogue and mixed-signal emulations of four million neurons and one billion synapses with digital connectivity on 20 silicon wafers used as semiconductor material, accelerating computing processes to an unbelievable level. Simulations on standard supercomputers are up to 1,000 times slower than biological neural networks and are not able to leverage multiple time scales—ranging from milliseconds to years—for learning and development.
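The action potentials that platforms like SpiNNaker exchange are commonly modelled with a leaky integrate-and-fire neuron. The sketch below is a minimal textbook version of that model, not SpiNNaker's actual implementation, and all constants are illustrative:

```python
def simulate_lif(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    each time step, integrates the input current, and emits a spike
    (an 'action potential') when it crosses the threshold, then resets."""
    v = 0.0
    spikes = []
    for i in currents:
        v = leak * v + i        # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)    # fire an action potential
            v = 0.0             # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant input drive makes the neuron fire at a regular rate.
print(simulate_lif([0.4] * 10))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

On neuromorphic hardware, millions of such units run in parallel, and only the spikes—sparse, binary events—are routed between cores, which is what keeps the energy cost so low.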

Billions of neurons. Trillions of synapses.

Neuromorphic computing is capable of taking AI to a level that we cannot even fathom, but it will take some time until these computers are anywhere near able to perform at the same level as a human. The most advanced chip out there at the moment has been developed by Intel and has over 100 million neurons. The human brain, however, contains roughly 90 billion neurons, connected by several trillion synapses. Despite this, neuromorphic computing offers huge potential for numerous edge AI applications such as smart cities, smart mobility, robotics and medicine. Martin Ziegler, Professor of Neuromorphic Electronics at the Technical University of Ilmenau, believes that the first neuromorphic IT systems will go into operation around 2025, while researchers foresee fully neuromorphic hardware on the market by the end of the decade.

This is an excerpt of an article in the print edition of Bechtle update 02/2018.