What is neuromorphic computing?
Neuromorphic computing, also known as neuromorphic engineering, is a computing approach that mimics the way the human brain works. It involves designing hardware and software that simulate the brain’s neuronal and synaptic structures and functions to process information.
Neuromorphic computing mimics the functioning of the brain to perform information processing tasks. The recent rise of neuromorphic circuits and sensors is reflected in particular by the increasing involvement of large companies (e.g., Intel, IBM, Sony) in the design of this hardware. Neuromorphic circuits are based on models of artificial neurons communicating – similar to biological neurons – by means of pulses produced over time. Among neuromorphic sensors, event-driven cameras operate in a manner analogous to the retina by transmitting information in the form of a pulse only when a local change in brightness is detected, allowing very high sampling speed despite low power consumption.
Neuromorphic computing may seem like a new field, but its origins date back to the 1980s. This was the decade in which Misha Mahowald and Carver Mead developed the first silicon retina and cochlea and the first silicon neurons and synapses that pioneered the neuromorphic computing paradigm. 1
Today, as artificial intelligence (AI) systems continue to scale, they require cutting-edge hardware and software behind them. Neuromorphic computing can act as an accelerator for AI growth, drive high-performance computing, and serve as one of the building blocks of artificial superintelligence. Experiments are even underway to combine neuromorphic computing with quantum computing.
Neuromorphic computing has been cited by management consultancy Gartner as one of the top emerging technologies for businesses. 3 Similarly, professional services firm PwC notes that neuromorphic computing is an essential technology for organizations to explore, as it is progressing rapidly but is not yet mature enough to become mainstream. 4
How neuromorphic computing works
Since neuromorphic computing is inspired by the human brain, it borrows heavily from biology and neuroscience.
According to the Queensland Brain Institute, neurons “are the fundamental units of the brain and nervous system.” 5 As messengers, these nerve cells transmit information between different areas of the brain and to other parts of the body. When a neuron is activated, or “fires,” it triggers the release of chemical and electrical signals that travel through a network of connection points called synapses, allowing neurons to communicate with each other. 6
These neurological and biological mechanisms are modeled in neuromorphic computing systems using spiking neural networks (SNNs). A spiking neural network is a type of neural network composed of spiking neurons and synapses.
Spiking neurons store and process data similarly to biological neurons, and each neuron has its own charge, delay, and threshold values. Synapses create pathways between neurons and also have delay and weight values associated with them. These values (neuronal charges, neuronal and synaptic delays, neuronal thresholds, and synaptic weights) can be programmed within neuromorphic computing systems.
In neuromorphic architecture, synapses are represented as transistor-based synaptic devices, which use circuits to transmit electrical signals. Synapses often include a learning component, altering their weight values over time based on activity within the spiking neural network.
Unlike conventional neural networks, SNNs factor time into their operation. A neuron’s charge value accumulates over time; when that charge reaches the threshold associated with the neuron, the neuron fires and propagates information through its synaptic network. If the charge never reaches the threshold, it dissipates and eventually leaks away. Furthermore, SNNs are event-driven, with neuronal and synaptic delay values that allow for asynchronous information dissemination.
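To make these dynamics concrete, the following sketch simulates a single leaky integrate-and-fire neuron in Python. The weight, threshold, leak rate, and input spike train are arbitrary values chosen for illustration; they do not correspond to any particular neuromorphic chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the charge accumulates
# weighted input spikes, leaks over time, and the neuron fires when the
# charge crosses the threshold. All parameter values are illustrative.

def simulate_lif(input_spikes, weight=0.6, threshold=1.0, leak=0.9):
    """Return the list of time steps at which the neuron fires."""
    charge = 0.0
    fire_times = []
    for t, spike in enumerate(input_spikes):
        charge = charge * leak + weight * spike   # leak, then integrate the input
        if charge >= threshold:                   # threshold reached: fire
            fire_times.append(t)
            charge = 0.0                          # reset after the spike
    return fire_times

if __name__ == "__main__":
    spikes_in = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0]    # example input spike train
    print(simulate_lif(spikes_in))                # prints [2, 6, 8]
```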
Neuromorphic hardware
In recent decades, many advances in neuromorphic computing have come in the form of neuromorphic hardware.
In academia, one of the earliest implementations was Stanford University’s Neurogrid, a mixed analog-digital multi-chip system that can “simulate a million neurons with billions of synaptic connections in real time.” 8 Meanwhile, the IMEC research center created a self-learning neuromorphic chip. 9
Government bodies have also supported neuromorphic research efforts. The European Union’s Human Brain Project, for example, was a 10-year initiative that ended in 2023; it aimed to better understand the brain, find new treatments for brain diseases, and develop new brain-inspired computing technologies.
These technologies include the large-scale neuromorphic machines SpiNNaker and BrainScaleS. SpiNNaker runs in real time on multi-core digital chips, with a packet-based network optimized for the exchange of spikes. BrainScaleS is an accelerated machine that emulates analog electronic models of neurons and synapses. It exists as a first-generation wafer-scale system (BrainScaleS-1) and a second-generation single-chip system (BrainScaleS-2). 10
Within the technology sector, neuromorphic processors include Intel’s Loihi, GrAI Matter Labs’ NeuronFlow, and IBM’s TrueNorth and its successor NorthPole neuromorphic chips.
Most neuromorphic devices are made of silicon and use CMOS (complementary metal-oxide-semiconductor) technology. But researchers are also exploring new types of materials, such as ferroelectric and phase-change materials. Non-volatile electronic memory elements called memristors (a portmanteau of “memory” and “resistor”) are another option for implementing memory and data processing in spiking neurons.
Neuromorphic computing algorithms
In the software field, the development of training and learning algorithms for neuromorphic computing involves both machine learning and other techniques. Some of these are listed below: 7
Deep learning
To perform inference, pre-trained deep neural networks can be converted into spiking neural networks using mapping strategies such as weight or activation normalization. A deep neural network can also be trained directly so that its neurons behave as spiking neurons.
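As a rough sketch of the conversion idea, the example below approximates a ReLU layer with integrate-and-fire neurons whose firing rates track the normalized activations. The weights, input, normalization by the maximum activation, and “reset by subtraction” scheme are illustrative assumptions rather than a description of any specific conversion toolkit.

```python
import numpy as np

# Sketch of rate-based DNN-to-SNN conversion: a ReLU layer's activations are
# approximated by the firing rates of integrate-and-fire neurons.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))          # pretend these are trained weights (3 inputs, 4 outputs)
x = rng.uniform(0, 1, size=3)        # one input sample with values in [0, 1]

relu_out = np.maximum(0.0, W @ x)    # what the original ANN layer computes

# Normalize weights so the largest activation maps to a firing rate of roughly 1
W_norm = W / max(relu_out.max(), 1e-9)

T = 1000                             # simulation steps
charge = np.zeros(4)
spike_counts = np.zeros(4)
for _ in range(T):
    in_spikes = (rng.uniform(size=3) < x).astype(float)  # rate-coded input
    charge += W_norm @ in_spikes
    fired = charge >= 1.0
    spike_counts += fired
    charge[fired] -= 1.0             # "reset by subtraction" keeps residual charge

# Firing rates should roughly track the normalized ReLU activations
print(relu_out / max(relu_out.max(), 1e-9))
print(spike_counts / T)
```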
Evolutionary algorithms
These bio-inspired algorithms employ principles of biological evolution, such as mutation, reproduction, and selection. Evolutionary algorithms can be used to design or train SNNs, modifying and optimizing their parameters (delays and thresholds, for example) and their structure (the number of neurons and how they are linked via synapses, for example) over time.
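A toy example of this approach, shown below, evolves the weight and threshold of a single leaky integrate-and-fire neuron so that its spike count on a fixed input matches a target; the input, target, population size, and mutation scale are all made up for illustration.

```python
import random

# Toy evolutionary search over two SNN parameters (synaptic weight and firing
# threshold) of a single leaky integrate-and-fire neuron.

INPUT = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0] * 5
TARGET_SPIKES = 8                        # desired number of output spikes

def count_spikes(weight, threshold, leak=0.9):
    charge, count = 0.0, 0
    for s in INPUT:
        charge = charge * leak + weight * s
        if charge >= threshold:
            count += 1
            charge = 0.0
    return count

def fitness(params):
    weight, threshold = params
    return -abs(count_spikes(weight, threshold) - TARGET_SPIKES)

random.seed(0)
population = [(random.uniform(0, 1), random.uniform(0.5, 2.0)) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                            # selection
    children = [(max(0.01, w + random.gauss(0, 0.1)),   # mutation
                 max(0.1, t + random.gauss(0, 0.1)))
                for w, t in parents for _ in range(3)]
    population = parents + children                     # next generation

best = max(population, key=fitness)
print(best, count_spikes(*best))
```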
Graphs
Spiking neural networks lend themselves well to graph representations, with an SNN taking the form of a directed graph. When a source node in the graph spikes, the time at which each of the other nodes first spikes corresponds to the length of the shortest path from that source node.
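The sketch below illustrates this property on a small, made-up graph: each node spikes once, each edge acts as a synapse whose delay equals the edge length, and the first spike time of each node equals its shortest-path distance from the source.

```python
import heapq

# Each node is a neuron that spikes once, each edge is a synapse whose delay
# equals the edge length, and the first time a node spikes equals its
# shortest-path distance from the source node.

graph = {                                  # node -> [(neighbor, synaptic delay)]
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}

def first_spike_times(source):
    pending = [(0, source)]                # spike events scheduled so far
    spike_time = {}                        # node -> time of its first spike
    while pending:
        t, node = heapq.heappop(pending)
        if node in spike_time:
            continue                       # a neuron only spikes once
        spike_time[node] = t
        for neighbor, delay in graph[node]:
            heapq.heappush(pending, (t + delay, neighbor))  # propagate the spike
    return spike_time

print(first_spike_times("A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```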
Plasticity
In neuroscience, neuroplasticity refers to the ability of the human brain and nervous system to modify their neuronal pathways and synapses in response to factors such as learning or injury. In neuromorphic architectures, synaptic plasticity is typically implemented through spike-timing-dependent plasticity (STDP), which adjusts synaptic weights based on the relative timing of neurons’ spikes.
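A minimal pair-based STDP rule might look like the following; the learning rates and time constant are illustrative, and real neuromorphic hardware typically implements variations of this scheme.

```python
import math

# Pair-based spike-timing-dependent plasticity (STDP): the synapse is
# strengthened when the presynaptic neuron fires shortly before the
# postsynaptic one, and weakened in the opposite case.

def stdp_update(weight, t_pre, t_post, a_plus=0.05, a_minus=0.04, tau=20.0):
    dt = t_post - t_pre
    if dt > 0:      # pre before post: potentiation
        weight += a_plus * math.exp(-dt / tau)
    elif dt < 0:    # post before pre: depression
        weight -= a_minus * math.exp(dt / tau)
    return min(max(weight, 0.0), 1.0)      # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=12.0)   # causal pairing: weight increases
w = stdp_update(w, t_pre=30.0, t_post=25.0)   # anti-causal pairing: weight decreases
print(w)
```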
Reservoir computing
Reservoir computing, which is based on recurrent neural networks, uses a “reservoir” to project inputs into a higher-dimensional computational space, with a trained readout mechanism to extract the output from the reservoir.
In neuromorphic computing, input signals are fed into a spiking neural network, which acts as a reservoir. The SNN is not trained; instead, it relies on the recurrent connections within its network along with synaptic delays to map inputs to a higher-dimensional computational space.
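The sketch below captures this setup with a fixed random recurrent network of integrate-and-fire neurons and a least-squares readout trained on a toy memory task; the network size, weights, and task are invented for illustration.

```python
import numpy as np

# Reservoir computing with a spiking reservoir: a fixed random recurrent
# network of integrate-and-fire neurons maps an input signal to a
# high-dimensional spike pattern, and only a linear readout is trained.

rng = np.random.default_rng(1)
N = 100                                     # reservoir neurons
W_in = rng.normal(0, 0.8, size=N)           # input -> reservoir weights (fixed)
W_res = rng.normal(0, 0.15, size=(N, N))    # recurrent weights (fixed, untrained)

def run_reservoir(signal, threshold=1.0, leak=0.9):
    """Return one state vector (spike pattern) per input time step."""
    charge = np.zeros(N)
    spikes = np.zeros(N)
    states = []
    for u in signal:
        charge = leak * charge + W_in * u + W_res @ spikes
        spikes = (charge >= threshold).astype(float)
        charge[spikes == 1] = 0.0           # reset neurons that fired
        states.append(spikes.copy())
    return np.array(states)

# Toy task: predict the input value from 3 steps ago using the reservoir state.
signal = rng.uniform(0, 1, size=500)
target = np.roll(signal, 3)
X = run_reservoir(signal)[10:]              # drop an initial transient
y = target[10:]

readout, *_ = np.linalg.lstsq(X, y, rcond=None)   # train only the readout
prediction = X @ readout
print("readout error:", np.mean((prediction - y) ** 2))
```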
Benefits of neuromorphic computing
Neuromorphic systems hold great promise from a computational perspective. These are some of the potential benefits offered by this type of computing architecture:
Adaptability
As a brain-inspired technology, neuromorphic computing also involves the notion of plasticity. Neuromorphic devices are designed for real-time learning, continuously adapting to evolving stimuli in the form of inputs and parameters. This means they could excel at solving novel problems.
Energy efficiency
As mentioned above, neuromorphic systems are event-driven, with neurons and synapses computing only in response to incoming spikes. As a result, only the portion of the network currently processing spikes consumes energy, while the rest remains idle. This leads to more efficient energy consumption.
High performance
Most modern computers, also known as von Neumann computers, have separate central processing units and memory units, and data transfer between these units can cause a bottleneck that affects speed. On the other hand, neuromorphic computing systems store and process data in individual neurons, resulting in lower latency and faster computation compared to von Neumann architecture.
Parallel processing
Due to the asynchronous nature of SNNs, individual neurons can perform different operations simultaneously. In theory, neuromorphic devices can execute as many tasks as they have neurons at any given time. As such, neuromorphic architectures have immense parallel processing capabilities, allowing them to complete functions quickly.
Challenges of neuromorphic computing
Neuromorphic computing is still an emerging field. And like any technology in its early stages, neuromorphic systems face some challenges:
Decreased accuracy
The process of converting deep neural networks to spiking neural networks can result in a drop in accuracy. In addition, the memristors used in neuromorphic hardware can exhibit cycle-to-cycle and device-to-device variations, and impose limits on synaptic weight values, both of which can further reduce accuracy.
Lack of benchmarks and standards
As a relatively new technology, neuromorphic computing lacks standards regarding architecture, hardware, and software. Neuromorphic systems also lack clearly defined and established benchmarks, sample data sets, test tasks, and metrics, making it difficult to evaluate performance and demonstrate effectiveness.
Limited accessibility and software
Most algorithmic approaches to neuromorphic computing still employ software designed for von Neumann hardware, which can limit results to what von Neumann architectures can achieve. Meanwhile, APIs (application programming interfaces), coding models, and programming languages for neuromorphic systems have yet to be developed or made widely available.
Steep learning curve
Neuromorphic computing is a complex field, drawing on disciplines such as biology, computer science, electrical engineering, mathematics, neuroscience, and physics. This makes it difficult to understand outside of an academic laboratory specializing in neuromorphic research.
Neuromorphic computing use cases
Current real-world applications of neuromorphic systems are few, but the computing paradigm could be applied to use cases such as the following:
Autonomous vehicles
Due to its high performance and orders-of-magnitude gains in energy efficiency, neuromorphic computing can help improve an autonomous vehicle’s navigation capabilities, enabling faster course correction and better collision avoidance while reducing energy consumption.
Cybersecurity
Neuromorphic systems can help detect unusual patterns or activities that could indicate cyberattacks or breaches. These threats can be quickly thwarted due to the low latency and rapid computation of neuromorphic devices.
Edge AI
The characteristics of neuromorphic architecture make it well suited for edge AI. Its low power consumption can help with the short battery life of devices like smartphones and wearables, while its adaptability and event-driven nature suit the information processing needs of remote sensors, drones, and other Internet of Things (IoT) devices.
Pattern recognition
Due to its extensive parallel processing capabilities, neuromorphic computing can be used in machine learning applications to recognize patterns in natural language and speech, analyze medical images, and process signals from brain MRIs and electroencephalogram (EEG) tests, which measure electrical activity in the brain.
Robotics
As an adaptive technology, neuromorphic computing can be used to enhance a robot’s real-time learning and decision-making capabilities, helping it better recognize objects, navigate intricate factory layouts, and operate more quickly on an assembly line.
What are the applications of SNNs?
IV: These bio-inspired networks are very well suited to processing biological signals. SNNs are successfully used for sound recognition: for example, they can identify spoken digits, but also the person reciting them. In the same vein, they are good at reading handwritten digits. They are also very effective at spotting sudden changes and anomalies in surveillance camera footage. As part of the European Neuropuls project, we are also planning to test these systems for real-time anomaly detection in network security, but we are not there yet.
What is the Neuropuls project?
IV: Launched at the beginning of the year, Neuropuls is an ambitious, interdisciplinary project that brings together fifteen European academic and industrial partners. It is led by the CNRS through Fabio Pavanello, a CNRS research fellow at the Institute of Microelectronics, Electromagnetism and Photonics – Microwave and Characterization Laboratory (IMEP-LaHC – CNRS/Université Grenoble Alpes/Université Savoie Mont-Blanc). The project aims to create a computing node based on a bio-inspired photonic network, suitable for the Internet of Things. It will be connected to larger networks in order to solve problems currently reserved for deep neural networks. The emphasis is on security and low energy consumption.
The development of these accelerators will rely on emerging non-volatile materials, new photonic designs for computing architectures, and security primitives, as well as RISC-V-based interfaces and HW/SW security layers.
You recently co-authored an article reviewing the latest advances in neuromorphic computing. What are the highlights?
IV: It is divided into three parts, each corresponding to a main idea. First, we describe how photonic networks can be used in neuromorphic computing, where they reduce energy losses. Then, we explain how SNNs can be implemented on Field Programmable Gate Arrays (FPGAs), a type of electronic chip that is now widely available; Politecnico di Torino has specialized in their use for neuromorphic computing.
Finally, the article discusses the different learning methods available for SNNs, as this technique offers a wide variety of them. It is therefore necessary to know how to choose the most suitable solution for each new situation, based on their reliability and accuracy.
What are the major obstacles that research in neuromorphic computing must overcome?
IV: In SNNs, all neurons must be connected to one another through synapses. This principle offers great computing power, but when implemented in hardware it causes significant signal degradation. This hinders the scaling of SNNs to problems as complex as those addressed by deep neural networks.
Choosing among the different forms of learning and developing new strategies for SNNs is also a thorny issue. Currently, the most efficient solutions in terms of computation time are generally not bio-inspired and consume a lot of energy.