The recent explosive growth in compute demand, fueled mainly by the rise of artificial intelligence (AI) and deep neural networks (DNNs), is driving the search for a novel computing paradigm that can overcome the barriers imposed by conventional electronic computing architectures. Photonic neural networks (PNNs) implemented on silicon photonic integration platforms stand out as a promising candidate for neural network (NN) hardware, offering the potential for energy-efficient and ultra-fast computation by exploiting the unique primitives of light, i.e., THz bandwidth, low power and low latency. Thus far, several demonstrations have revealed the huge potential of PNNs for performing both linear and non-linear NN operations with unparalleled speed and energy-consumption metrics. Transforming this potential into a tangible reality for Deep Learning (DL) applications requires, however, a deep understanding of the basic PNN principles, requirements and challenges across all constituent architectural, technological and training aspects. In this paper we review state-of-the-art photonic linear processors and discuss their challenges and prospective solutions for future photonic-assisted machine learning engines. Additionally, recent experimental results using SiGe EAMs in a crossbar (Xbar) layout are presented, validating light's credentials for performing ultra-fast linear operations with unparalleled accuracy. Finally, we provide a holistic overview of the optics-informed NN training framework, which incorporates the physical properties of photonic building blocks into the training process in order to improve NN classification accuracy and effectively elevate neuromorphic photonic hardware into high-performance DL computational settings.
Integrated photonic computing promises revolutionary strides in processing power, energy efficiency, and speed, propelling us into an era of unprecedented computational capabilities. By harnessing the innate properties of light, such as high-speed propagation, inherent parallel processing capabilities, and the ability to carry vast amounts of information, photonic computing transcends the limitations of traditional electronic architectures. Furthermore, silicon photonic neural networks hold promise to transform artificial intelligence by enabling faster training and inference with significantly reduced power consumption. This potential leap in efficiency could revolutionize data centers, high-performance computing, and edge computing, minimizing environmental impact while expanding the boundaries of computational possibilities. The latest research on our silicon photonic platform for next-generation optical compute accelerators will be presented and discussed.
Photonic Neural Networks (PNNs) implemented on silicon photonic (SiPho) platforms stand out as a promising candidate for neural network hardware, offering the potential for energy-efficient and ultra-fast computation by exploiting the unique primitives of light, i.e., THz bandwidth, low power and low latency. In this paper, we review state-of-the-art photonic linear processors, discuss their challenges and propose solutions for future photonic-assisted machine learning engines. Additionally, we present experimental results on the recently introduced SiPho 4x4 coherent crossbar (Xbar) architecture, which departs from existing Singular Value Decomposition (SVD)-based schemes while offering single-time-step programming complexity. The Xbar architecture utilizes silicon-germanium (SiGe) Electro-Absorption Modulators (EAMs) as its computing cells and Thermo-Optic (TO) Phase Shifters (PSs) to provide the sign information at every weight matrix node. Towards experimentally evaluating the Xbar architecture, we performed 10,024 arbitrary linear transformations on the SiPho processor, with the respective fidelity values converging to 100%. Subsequently, we focus on the execution of the non-linear part of the NN by demonstrating a programmable analog optoelectronic circuit that can be configured to provide a plethora of non-linear activation functions, including tanh, sigmoid, ReLU and inverted ReLU, at a 2 GHz update rate. Finally, we provide a holistic overview of optics-informed neural networks towards improving the classification accuracy and performance of optics-specific Deep Learning (DL) computational tasks by leveraging the synergy of optical physics and DL.
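The fidelity figure quoted above can be made concrete with a short numerical sketch. The metric below is one common definition of matrix fidelity (our assumption; the abstract does not state which normalization the experiment uses), comparing a target 4×4 linear transform against a noisy measured one:

```python
import numpy as np

def fidelity(target, measured):
    """Similarity between target and measured transfer matrices.

    One common definition (an assumption, not taken from the paper):
    F = |Tr(T^H M)|^2 / (Tr(T^H T) * Tr(M^H M)).
    F = 1 iff the measured matrix matches the target up to a global scale/phase.
    """
    num = np.abs(np.trace(target.conj().T @ measured)) ** 2
    den = (np.trace(target.conj().T @ target).real
           * np.trace(measured.conj().T @ measured).real)
    return num / den

rng = np.random.default_rng(0)
# Arbitrary complex 4x4 target, plus a slightly perturbed "measured" version.
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = T + 0.01 * (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
print(fidelity(T, M))  # close to 1 for a faithful implementation
```

For a perfectly programmed Xbar the measured transfer matrix equals the target one and the metric evaluates to exactly 1, i.e. 100% fidelity.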
Rapid developments in computer science have led to increasing demand for efficient computing systems. Linear photonic systems have emerged as a favorable candidate for workload-demanding architectures, owing to their small footprint and low energy consumption. Mach-Zehnder Interferometers (MZIs) serve as the foundational building block for several photonic circuits and have been widely used as modulators, switches and variable power splitters. However, combining MZIs to realize multiport splitters remains a challenge, since the exponential increase in the number of devices and the consequent increase in losses limit the performance of MZI-based multiport devices. To overcome such limitations, incorporating alternative low-loss integration platforms combined with a generalized MZI design could allow the realization of a robust variable power splitter. In this work, we present for the first time a 4×4 Generalized Mach-Zehnder Interferometer (GMZI) implemented on a Si3N4 photonic integration platform, and we experimentally demonstrate its operation as a variable power splitter. We developed an analytical model describing the operation of the 4×4 GMZI, allowing us to evaluate the impact of several parameters on the overall performance of the device and to investigate its tolerance to fabrication imperfections and design alterations. Its experimental evaluation as a variable power splitter reveals a controlled imbalance of up to 10 dB across multiple output ports of the device, validating the theoretically derived principles of operation.
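A minimal transfer-matrix model of a GMZI, in the spirit of the analytical model mentioned above, sandwiches an array of phase shifters between two N×N multiport couplers. Here the couplers are idealized as unitary DFT matrices (an assumption for illustration; fabricated MMI couplers carry additional fixed phases and imbalance):

```python
import numpy as np

N = 4
# Idealized N x N multiport coupler modeled as a unitary DFT matrix.
C = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)

def gmzi(phases):
    """Transfer matrix of a generalized MZI: coupler, phase array, coupler."""
    return C @ np.diag(np.exp(1j * np.asarray(phases))) @ C

# With all phase shifters equal, light entering port 0 exits a single port
# (a bar/cross switch state); intermediate phase settings split the power.
T = gmzi([0.0, 0.0, 0.0, 0.0])
powers = np.abs(T[:, 0]) ** 2   # output power distribution for input port 0
print(np.round(powers, 3))
```

Sweeping the four phases in such a model is what allows the impact of fabrication-induced phase errors on the output imbalance to be studied numerically.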
The explosive volume growth of deep-learning (DL) applications has ushered in a new era in computing, with neuromorphic photonic platforms promising to merge ultra-high-speed and energy-efficiency credentials with brain-inspired computing primitives. The transfer of deep neural networks (DNNs) onto silicon photonic (SiPho) architectures requires, however, an analog computing engine that can perform tiled matrix multiplication (TMM) at line rate to support DL applications with a large number of trainable parameters, similar to the approach followed by state-of-the-art electronic graphics processing units. Herein, we demonstrate an analog SiPho computing engine that relies on a coherent architecture and can perform optical TMM at the record-high speed of 50 GHz. Its potential to support DL applications in which the number of trainable parameters exceeds the available hardware dimensions is highlighted through a photonic DNN that can reliably detect distributed denial-of-service attacks within a data center with a Cohen's kappa score-based accuracy of 0.636.
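The tiling step that lets a fixed-size photonic core serve weight matrices larger than the hardware dimensions can be illustrated in software. The sketch below is a hypothetical NumPy emulation, not the chip's API: a large product is accumulated from fixed-size sub-block products, mirroring how a GPU schedules blocked matrix multiplication:

```python
import numpy as np

def tiled_matmul(A, B, tile=4):
    """Emulate tiled matrix multiplication on a fixed-size (tile x tile) core.

    Each inner call A_block @ B_block is the operation the photonic engine
    would execute in hardware; the loops and accumulation happen off-chip.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=np.result_type(A, B))
    for i in range(0, m, tile):            # rows of the output
        for j in range(0, n, tile):        # columns of the output
            for p in range(0, k, tile):    # accumulate over the inner dimension
                out[i:i+tile, j:j+tile] += (A[i:i+tile, p:p+tile]
                                            @ B[p:p+tile, j:j+tile])
    return out

A = np.random.default_rng(1).standard_normal((8, 8))
B = np.random.default_rng(2).standard_normal((8, 8))
print(np.allclose(tiled_matmul(A, B), A @ B))  # True
```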
The emergence of demanding machine learning and AI workloads in modern computational systems and Data Centers (DCs) has fueled a drive towards custom hardware designed to accelerate Multiply-Accumulate (MAC) operations. In this context, neuromorphic photonics has recently attracted attention as a promising technological candidate that can transfer photonics' low-power, high-bandwidth credentials into neuromorphic hardware implementations. However, the deployment of such systems necessitates progress both in the underlying constituent building blocks and in the development of deep-learning training models that can take into account the physical properties of the employed photonic components and compensate for their non-ideal performance. Herein, we present an overview of our progress in photonic neuromorphic computing based on coherent layouts, which exploit the phase of the light traversing the photonic circuitry both for sign representation and for matrix manipulation. Our approach breaks through the direct trade-off between insertion loss and modulation bandwidth of state-of-the-art coherent architectures and allows high-speed operation within reasonable energy envelopes. We present a silicon-integrated coherent linear neuron (COLN) that relies on electro-absorption modulators (EAMs) both for its on-chip data generation and for weighting, demonstrating a record-high 32 GMAC/sec/axon compute linerate and an experimentally obtained accuracy of 95.91% in the MNIST classification task. Moreover, we present our progress on component-specific neuromorphic circuitry training, considering both the photonic link's thermal noise and its channel response. Finally, we present our roadmap for scaling our architecture using a novel optical crossbar design towards a 32×32 layout that can offer >32 GMAC/sec/axon computational power at ~0.09 pJ/MAC.
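The quoted throughput and energy figures combine into a simple power estimate. The arithmetic below uses only the numbers in the abstract; reading "GMAC/sec/axon" as a per-axon rate aggregated over 32 axons is our assumption, not a statement from the paper:

```python
# Back-of-the-envelope figures for the projected 32x32 crossbar.
axons = 32
rate_per_axon_gmacs = 32      # GMAC/s per axon (quoted)
energy_pj_per_mac = 0.09      # pJ/MAC (quoted)

# Aggregate throughput under the per-axon-rate assumption.
total_gmacs = axons * rate_per_axon_gmacs              # GMAC/s
# Power = energy per MAC x MAC rate.
power_w = total_gmacs * 1e9 * energy_pj_per_mac * 1e-12
print(total_gmacs, power_w)   # ~1 TMAC/s at under 100 mW
```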