If the electro-optic conversion in current photonic NNs could be postponed until the very end of the network, the execution time would simply be the photon time-of-flight delay. Here we discuss a first design, and the performance, of an all-optical perceptron and feed-forward NN. The key is the dual-purpose, foundry-approved heterogeneous integration of phase-change materials, providing a) a volatile nonlinear activation function (threshold), realized with ps-short optical pulses that induce a non-equilibrium variation of the material's permittivity, and b) a thermo-optically written, non-volatile optical multi-cell (5-bit) memory holding the NN weights after (offline) training. Once trained, the weights require only rare updates, thus saving power. Performance-wise, such an integrated all-optical NN is capable of < fJ/MAC, based on experimentally demonstrated pump-probe dynamics [Waldecker et al., Nat. Mater. 2015], delivers a per-perceptron delay of ~ps [Miscuglio et al., Opt. Mater. Express 2018], and offers high cascadability.
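To make the weighting and thresholding concrete, the following is a minimal behavioral sketch in Python, not the authors' implementation: the trained weights are quantized to the 32 levels of the 5-bit multi-cell memory, and the volatile phase-change activation is modeled as a hard threshold on the accumulated (MAC) signal. The function names, the normalization to [0, 1], and the threshold value are illustrative assumptions.

```python
import numpy as np

def quantize_5bit(w, w_max=1.0):
    """Snap weights to the 32 levels of a hypothetical 5-bit PCM
    multi-cell memory; weights are assumed to be normalized
    transmissions in [0, w_max]."""
    levels = 2**5 - 1
    w = np.clip(w, 0.0, w_max)
    return np.round(w / w_max * levels) / levels * w_max

def optical_perceptron(x, w, threshold=0.5):
    """All-optical perceptron, behavioral model: weighted accumulation
    (MAC) of input intensities, followed by a hard threshold standing
    in for the pulse-induced permittivity switch."""
    w_q = quantize_5bit(w)          # weights written once, offline trained
    mac = np.dot(w_q, x)            # accumulation done optically in hardware
    return 1.0 if mac > threshold else 0.0

# Example: a 4-input perceptron with illustrative trained weights
x = np.array([0.8, 0.2, 0.9, 0.1])      # input optical intensities (normalized)
w = np.array([0.31, 0.77, 0.52, 0.12])  # weights stored in non-volatile PCM cells
print(optical_perceptron(x, w))
```

In hardware, the dot product is carried out in the optical domain and the comparison by the pulse-induced permittivity change; the sketch only reproduces the input-output behavior.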
Photonic neural networks (PNNs) are a promising alternative to electronic GPUs for machine-learning tasks. The value proposition of PNNs originates from i) near-zero energy consumption for vector-matrix multiplication once trained, ii) 10-100 ps short interconnect delays, and iii) the weak optical nonlinearity required, which can be provided by emerging fJ/bit-efficient electro-optic devices. Furthermore, photonic integrated circuits (PICs) offer high data bandwidth at low latency, with competitive footprints and synergies with microelectronics architectures, such as foundry access. This talk discusses recent advances in photonic neuromorphic networks and provides a vision for photonic information processors. Details include 1) a comparison of compute technologies with respect to compute efficiency (i.e., MAC/J) and compute speed (i.e., MAC/s), 2) a discussion of photonic neurons, i.e., perceptrons, 3) architectural network implementations, 4) a broadcast-and-weight protocol, 5) nonlinear activation functions provided via electro-optic modulation, and 6) experimental demonstrations of early-stage prototypes. The talk opens by answering why neural networks are of interest and concludes with the application regimes of PNN processors, which reside in deep learning, nonlinear optimization, and real-time processing.
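As a rough illustration of the broadcast-and-weight idea in item 4), the sketch below models each neuron as receiving every wavelength channel of a shared WDM bus, weighting them with a tunable weight bank (signed weights standing in for balanced photodetection), summing in a photodetector, and applying a sigmoid as a stand-in for an electro-optic modulator's transfer function. This is a behavioral, assumption-laden model, not the protocol's specification; all names and values are illustrative.

```python
import numpy as np

def neuron(channel_powers, weights):
    """One broadcast-and-weight neuron: per-wavelength weights in
    [-1, 1] model a microring weight bank with balanced photodetection;
    the photodetector sums the weighted powers, and a sigmoid stands in
    for the modulator's electro-optic nonlinearity."""
    summed = np.dot(weights, channel_powers)   # photodetector accumulation
    return 1.0 / (1.0 + np.exp(-summed))       # E/O activation (assumed shape)

def layer(channel_powers, weight_matrix):
    """A layer: the same WDM bus is broadcast to every neuron, and each
    neuron applies its own row of weights."""
    return np.array([neuron(channel_powers, w) for w in weight_matrix])

bus = np.array([0.4, 0.9, 0.1])     # powers on 3 wavelength channels
W = np.array([[0.5, -0.3, 0.8],
              [-0.7, 0.2, 0.1]])    # 2 neurons x 3 wavelengths
print(layer(bus, W))
```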