Photo Credit: CNRI/SPL

A key milestone in the development of advanced computing devices designed to mimic biological systems was reached this month, with researchers reporting that superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain. 

According to Nature, this could open the door to more natural machine-learning software, although many obstacles stand in the way. 

Artificial intelligence software has begun to imitate the brain more and more, but because conventional hardware was not designed to run brain-like algorithms, machine-learning tasks require orders of magnitude more computing power than the human brain does. 

“There must be a better way to do this, because nature has figured out a better way to do this,” explained Michael Schneider, a physicist at the US National Institute of Standards and Technology (NIST) in Boulder, Colorado, and a co-author of the study. 

NIST is one of a handful of groups trying to develop ‘neuromorphic’ hardware that mimics the human brain in the hope that it will run brain-like software more efficiently. Like biological neurons, neuromorphic devices can collect small amounts of information from multiple sources, alter it to produce a different type of signal and fire a burst of electricity only when needed. As a result, neuromorphic devices require less energy to run. 
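The firing behaviour described above can be illustrated with a standard leaky integrate-and-fire model. This is a generic sketch of the principle, not a model of the NIST device: the neuron accumulates weak inputs from several sources and spikes only when their combined effect crosses a threshold.

```python
# Minimal leaky integrate-and-fire neuron: it integrates small input
# signals from multiple sources and fires only when needed, as the
# neuromorphic devices described above do. All parameter values here
# are illustrative assumptions.

def lif_step(potential, inputs, leak=0.9, threshold=1.0):
    """Advance the neuron one time step; return (new_potential, spiked)."""
    potential = potential * leak + sum(inputs)  # integrate incoming signals
    if potential >= threshold:                  # fire a burst only when needed
        return 0.0, True                        # reset after the spike
    return potential, False

# Feed weak inputs from three sources; the neuron stays silent until
# their accumulated effect crosses the threshold, then fires and resets.
potential, spikes = 0.0, []
for t in range(6):
    potential, fired = lif_step(potential, [0.2, 0.15, 0.1])
    spikes.append(fired)
```

Because each input is well below the threshold, the neuron fires only intermittently, which is why event-driven designs like this consume far less energy than hardware that processes every signal on every clock cycle.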

However, these devices are still inefficient, especially when they transmit information across the gap, or synapse, between transistors. In a bid to change this, Schneider’s team created neuron-like electrodes out of niobium superconductors, which conduct electricity without resistance. They filled the gaps between the superconductors with thousands of nanoclusters of magnetic manganese. 

The nanoclusters can be aligned to point in different directions by varying the amount of magnetic field in the synapse. In turn, this allows the system to encode information in both the level of electricity and the direction of magnetism faster than human neurons – and using one ten-thousandth of the energy of a biological synapse.

In computer simulations, the synthetic neurons could collate input from up to nine sources before passing it on to the next electrode. However, millions of synapses would be needed before a system based on the technology could be used for complex computing, explains Schneider, and it remains to be seen whether it can be scaled to that level. 

Another problem is that the synapses can only operate at temperatures close to absolute zero, and need to be cooled with liquid helium. 

Speaking about the research, Carver Mead, an electrical engineer at the California Institute of Technology in Pasadena, praises the work as a fresh approach to neuromorphic computing. “The field’s full of hype, and it’s nice to see quality work presented in an objective way,” he commented. 

However, he cautions that it could be a long time before the chips are used for real computing, and notes that they face tough competition from the many other neuromorphic computing devices under development. 

Steven Furber, a computer engineer at the University of Manchester, UK, who studies neuromorphic computing, stresses that practical applications remain a long way off. 

“The device technologies are potentially very interesting, but we don’t yet understand enough about the key properties of the biological synapse to know how to use them effectively,” he added. 

Furber explains that it takes 10 years or more for new computing devices to reach the market, so he believes it is worth developing as many different technological approaches as possible, even as neuroscientists struggle to understand the human brain.