As the field of deep learning continues to expand, it has become increasingly important to develop energy-efficient hardware that can adapt to these advances. However, achieving learning on a chip requires algorithms that are compatible with hardware and can be implemented on imperfect devices. One promising training technique is Equilibrium Propagation, introduced in 2017 by Scellier and Bengio. This approach provides gradient estimates based on a spatially local learning rule, making it more biologically plausible and better suited to hardware than backpropagation. However, the mathematical equations of this algorithm cannot be directly transposed to a physical system. In this study, the Equilibrium Propagation algorithm is adapted for use with a real physical system, and its potential application to spintronic devices is discussed.
As deep learning continues to grow, developing adapted energy-efficient hardware becomes crucial. Learning on a chip requires hardware-compatible learning algorithms and their realization with physically imperfect devices. Equilibrium Propagation is a training technique, introduced in 2017 by Scellier and Bengio, that gives gradient estimates based on a spatially local learning rule, making it both more biologically plausible and more hardware-compatible than backpropagation. This work uses the Equilibrium Propagation algorithm to train a neural network with hardware-in-the-loop simulations using hafnium oxide memristor synapses. Realizing this type of learning with imperfect and noisy devices paves the way for on-chip learning at very low energy.
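To make the two-phase gradient estimate concrete, here is a minimal NumPy sketch of Equilibrium Propagation on a toy layered network. The layer sizes, hard-sigmoid activation, relaxation dynamics and hyperparameters are illustrative assumptions, not the networks or devices used in the works above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layered network: input -> hidden -> output (sizes are arbitrary choices)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

rho = lambda s: np.clip(s, 0.0, 1.0)        # hard-sigmoid activation

def relax(x, target=None, beta=0.0, steps=50, dt=0.5):
    """Let the network settle to a (free or nudged) equilibrium of its dynamics."""
    h = np.zeros(n_hid)
    y = np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + rho(y) @ W2.T
        dy = -y + rho(h) @ W2
        if target is not None:
            dy += beta * (target - y)        # weak nudging force toward the target
        h += dt * dh
        y += dt * dy
    return h, y

def ep_update(x, target, beta=0.5, lr=0.05):
    """Equilibrium Propagation: contrast the two equilibria to estimate the gradient
    with a spatially local (pre x post) learning rule."""
    global W1, W2
    h0, y0 = relax(x)                        # free phase
    hb, yb = relax(x, target, beta)          # weakly clamped (nudged) phase
    W1 += lr / beta * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
    W2 += lr / beta * (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0)))
    return y0

# One toy update on a random input/target pair
x = rng.random(n_in)
t = np.array([1.0, 0.0])
print(ep_update(x, t))
```

Because each weight update only involves the activities of the two neurons it connects, measured at the two equilibria, the rule is the kind of local quantity that imperfect physical devices can plausibly implement.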
Neuromorphic computing takes inspiration from the brain to create energy-efficient hardware for information processing, capable of highly sophisticated tasks. Systems built with standard electronics achieve gains in speed and energy by mimicking the distributed topology of the brain. Scaling up such systems and improving their energy usage, speed and performance by several orders of magnitude requires a revolution in hardware. We discuss how including more physics in the algorithms and nanoscale materials used for data processing could have a major impact in the field of neuromorphic computing. We review striking results that leverage physics to enhance the computing capabilities of artificial neural networks, using resistive switching materials, photonics, spintronics and other technologies. We discuss the paths that could lead these approaches to maturity, towards low-power, miniaturized chips that could infer and learn in real time.
In this work, we describe the design, realization and characterization of a magnetic version of the Galton board, an archetypal statistical device originally designed to exemplify normal distributions. Although simple in its macroscopic form, achieving an equivalent nanoscale system poses many challenges related to the generation of sufficiently similar nanometric particles and the strong influence that nanoscale defects can have on the stochasticity of random processes. We demonstrate how the quasi-particle nature and the chaotic dynamics of magnetic domain walls can be harnessed to create nanoscale stochastic devices [1]. Furthermore, we show how the direction of an externally applied magnetic field can be employed to controllably tune the probability distribution at the output of the devices, and how the removal of elements inside the array can be used to modify this distribution.
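The principle of tuning the output distribution by biasing each branching event can be illustrated with a short simulation. The board size and bias value below are illustrative assumptions; in the devices themselves the bias is set by the applied-field direction and the array geometry.

```python
import numpy as np

def galton_board(n_rows=12, n_walkers=100_000, p_right=0.5, rng=None):
    """Simulate a Galton board: each walker makes n_rows independent left/right choices.
    p_right = 0.5 gives a binomial (near-normal) output histogram; biasing it skews the
    distribution, analogous to tilting the branching probability of a domain wall."""
    rng = rng or np.random.default_rng(0)
    choices = rng.random((n_walkers, n_rows)) < p_right
    bins = choices.sum(axis=1)                       # final output column of each walker
    return np.bincount(bins, minlength=n_rows + 1) / n_walkers

print(galton_board(p_right=0.5))   # symmetric, ~binomial(12, 0.5)
print(galton_board(p_right=0.7))   # skewed toward the right-hand outputs
```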
For numerous radio-frequency (RF) applications, such as medicine, RF fingerprinting or radar classification, it is important to be able to apply artificial neural networks to RF signals. In this work, we show that Multiply-And-Accumulate operations can be applied directly to RF signals without digitization, thanks to Magnetic Tunnel Junctions (MTJs). These devices are similar to magnetic memories that are already industrialized and compatible with CMOS.
We show experimentally that a chain of these MTJs can rectify different RF signals simultaneously, and that the synaptic weight encoded by each junction can be tuned via its resonance frequency.
Through simulations, we train a layer of these junctions to classify a handwritten digit dataset. Finally, we show that our system can scale to multi-layer neural networks using MTJs to emulate neurons.
We propose a fast and compact system that allows RF signals to be received and processed in situ and at the nanoscale.
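As an illustration of the multiply-and-accumulate principle, the toy model below treats each junction as a spin-diode rectifier whose DC output is peaked at its own resonance frequency, so the summed voltages of a chain approximate a weighted sum of the input RF powers. The Lorentzian line shape, linewidth and parameter values are assumptions for illustration only.

```python
import numpy as np

def rectified_voltage(p_in, f_in, f_res, linewidth=0.1, gain=1.0):
    """DC voltage produced by one junction rectifying an RF tone, modeled here as a
    Lorentzian response centered on the junction's resonance frequency."""
    return gain * p_in / (1.0 + ((f_in - f_res) / linewidth) ** 2)

def mtj_mac(powers, freqs, resonances):
    """Chain of junctions: the summed DC voltages approximate a multiply-and-accumulate
    of the input powers, with weights set by each junction's resonance frequency."""
    return sum(rectified_voltage(p, f, fr)
               for p, f in zip(powers, freqs)
               for fr in resonances)

# Example: three RF inputs at 1, 2 and 3 (arbitrary frequency units),
# read out by junctions tuned near 1 and 3
powers = [0.5, 1.0, 0.2]
freqs = [1.0, 2.0, 3.0]
print(mtj_mac(powers, freqs, resonances=[1.0, 3.0]))
```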
Recently, there has been impressive progress in the field of artificial intelligence. A striking example is AlphaGo, an algorithm developed by Google DeepMind, which defeated the world champion Lee Sedol at the game of Go. However, in terms of power consumption, the brain remains the absolute winner, by four orders of magnitude. Indeed, today, brain-inspired algorithms run on our current sequential computers, which have a very different architecture from the brain. If we want to build smart chips capable of cognitive tasks with low power consumption, we need to fabricate on silicon huge parallel networks of artificial synapses and neurons, bringing memory close to processing. The aim of the presented work is to deliver a new breed of bio-inspired magnetic devices for pattern recognition. Their functionality is based on the magnetic reversal properties of an artificial spin ice in a Kagome geometry, for which the magnetic switching occurs by avalanches.
Spin torque magnetic memory (ST-MRAM) is currently under intense academic and industrial development, as it features non-volatility, high write and read speed and high endurance. However, one of its great challenges is the probabilistic nature of programming magnetic tunnel junctions, which imposes significant circuit or energy overhead for conventional ST-MRAM applications. In this work, we show that in unconventional computing applications, this drawback can actually be turned into an advantage. First, we show that conventional magnetic tunnel junctions can be reinterpreted as stochastic “synapses” that can be the basic element of low-energy learning systems. System-level simulations on a vehicle-counting task highlight the potential of the technology for learning systems. We investigate in detail the impact of magnetic tunnel junctions’ imperfections. Second, we introduce how intentionally superparamagnetic tunnel junctions can form the basis of low-energy, fundamentally stochastic computing schemes, which harness part of their energy from thermal noise. We give two examples built around the concepts of synchronization and Bayesian inference. These results suggest that the stochastic effects of spintronic devices, traditionally interpreted by electrical engineers as a drawback, can be reinvented as an opportunity for low-energy circuit design.
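A minimal sketch of the stochastic-synapse idea, assuming a simple binary weight that only switches with some probability when programmed (the switching probability, array size and programming scheme are illustrative, not the simulated technology above): repeated presentations make the array converge toward the desired state in expectation, without any circuit overhead to guarantee each individual write.

```python
import numpy as np

rng = np.random.default_rng(1)

class StochasticBinarySynapses:
    """Array of binary synapses that switch only with probability p_switch when programmed,
    mimicking the probabilistic write of magnetic tunnel junctions."""
    def __init__(self, n_in, n_out, p_switch=0.1):
        self.w = rng.integers(0, 2, (n_in, n_out)).astype(float)   # binary weights (0 or 1)
        self.p = p_switch

    def forward(self, x):
        return x @ self.w

    def program(self, desired):
        """Attempt to write the desired binary pattern; each junction flips only with probability p."""
        flips = (self.w != desired) & (rng.random(self.w.shape) < self.p)
        self.w[flips] = desired[flips]

# Repeated presentations drive the array toward the desired state despite unreliable writes
syn = StochasticBinarySynapses(4, 3, p_switch=0.2)
target = rng.integers(0, 2, (4, 3)).astype(float)
for _ in range(50):
    syn.program(target)
print(np.mean(syn.w == target))   # fraction of junctions now in the desired state (close to 1)
```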
In the last five years, Artificial Intelligence has made striking progress and now defeats humans at subtle strategy games such as Go, and even poker. However, these algorithms run on traditional processors whose architecture is radically different from the biological neural networks that inspired them. This considerably slows them down and requires massive amounts of electrical power, more than ten thousand times what the brain typically needs to function. This energy dissipation is not only becoming an environmental issue, but it also sets a limit to the size of neural networks that can be simulated. We are at a point where we need to rethink the way we compute and build hardware chips directly inspired by the architecture of the brain. This is a challenge. Indeed, contrary to current electronic systems, the brain is a huge parallel network closely entangling memory and processing.
In this talk, I will show that, to build the neuromorphic chips of the future, we will need to emulate the functionalities of synapses and neurons at the nanoscale. I will review the recent developments of memristive nano-synapses and oscillating nano-neurons, the physical mechanisms at play, and the challenges in terms of materials. Finally, I will present the first achievements of neuromorphic computing with novel nanodevices and the fascinating perspectives of this emerging field.
The rich physics of spin-transfer nano-oscillators (STNOs) has sparked great interest in creating a new generation of multi-functional microwave spintronic devices [1]. It has often been emphasized that their nonlinear behavior gives a unique opportunity to tune their radiofrequency (rf) properties, but at the cost of large phase noise that is not compatible with practical applications. To tackle this issue, as well as to open opportunities for new developments in non-Boolean computing [1], one strategy is to use electrical synchronization of STNOs through the rf current. It is therefore crucial to understand how the synchronization forces are transmitted through the electric current. In this talk, we will first present the results of an experimental study showing the self-synchronization of an STNO by re-injecting its rf current after a certain delay [2]. In the second part, we demonstrate that the synchronization of two vortex STNOs connected in parallel can be tuned either by an artificial delay or by the spin-transfer torques [3]. The synchronization of spin-torque oscillators, combined with the drastic improvement of the rf features in the synchronized state (the linewidth decreases by a factor of 2 and the power increases by a factor of 4), marks an important milestone towards a new generation of rf devices based on STNOs.
The authors acknowledge financial support from the ANR agency (SPINNOVA: ANR-11-NANO-0016) and the EU grant MOSAIC (ICT-FP7-317950).
[1] N. Locatelli, V. Cros, and J. Grollier, Nat. Mater. 13, 11 (2014).
[2] S. Tsunegi et al., arXiv:1509.05583 (2015).
[3] R. Lebrun et al., arXiv:1601.01247 (2016).
The brain displays many features typical of non-linear dynamical networks, such as synchronization or chaotic behaviour. These observations have inspired a whole class of models that harness the power of complex non-linear dynamical networks for computing. In this framework, neurons are modeled as non-linear oscillators, and synapses as the coupling between oscillators. These abstract models are very good at processing waveforms for pattern recognition or at generating the precise time sequences useful for robotic motion. However, there are very few hardware implementations of these systems, because large numbers of interacting non-linear oscillators are needed. In this talk, I will show that coupled spin-torque nano-oscillators are very promising for realizing cognitive computing at the nanometer and nanosecond scale, and will present our first results in this direction.
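The picture of neurons as oscillators and synapses as couplings can be made concrete with a Kuramoto-type phase model. This is an abstract toy sketch rather than a model of the spin-torque devices themselves; the coupling strength, frequency spread and network size are arbitrary choices.

```python
import numpy as np

def kuramoto(n=50, k=1.5, steps=2000, dt=0.01, rng=None):
    """Kuramoto phase oscillators: d(theta_i)/dt = omega_i + (k/n) * sum_j sin(theta_j - theta_i).
    The coupling k plays the role of the synapses; strong enough coupling locks the phases."""
    rng = rng or np.random.default_rng(2)
    omega = rng.normal(1.0, 0.3, n)                  # spread of natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (k / n) * coupling)
    # Order parameter r in [0, 1]: r ~ 1 means the oscillators are phase-locked
    return abs(np.exp(1j * theta).mean())

print(kuramoto(k=0.1))   # weak coupling: incoherent, small order parameter
print(kuramoto(k=2.0))   # strong coupling: nearly phase-locked, r close to 1
```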
We propose an experimental scheme to determine the spin-transfer torque efficiency excited by the spin-orbit interaction in ferromagnetic bilayers from the measurement of the longitudinal magnetoresistance. Solving a diffusive spin-transport theory with appropriate boundary conditions yields an analytical formula for the longitudinal charge current density. The longitudinal charge current has a term that is proportional to the square of the spin-transfer torque efficiency and that also depends on the ratio of the film thickness to the spin diffusion length of the ferromagnet. Extracting this contribution from measurements of the longitudinal resistivity as a function of the thickness can give the spin-transfer torque efficiency.
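Schematically, the dependence described above can be written as follows; this is an illustrative form based only on the abstract, not the authors' exact expression.

```latex
% Illustrative schematic only: the longitudinal resistivity acquires a correction quadratic in
% the spin-transfer torque efficiency \theta, modulated by some function of the ferromagnet
% thickness t_F over its spin diffusion length \lambda_F, so fitting \rho_{xx}(t_F) gives \theta.
\rho_{xx}(t_F) \;\approx\; \rho_0 \;+\; \Delta\rho, \qquad
\Delta\rho \;\propto\; \theta^{2}\, f\!\left(\frac{t_F}{\lambda_F}\right)
```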