Optical neural networks (ONNs) have gained significant attention as a promising neuromorphic framework due to their high parallelism, ultrahigh inference speeds, and low latency. However, the hardware implementation of ONN architectures has been limited by their high area overhead. These architectures have primarily focused on general matrix multiplications (GEMMs), resulting in unnecessarily large area costs and high control complexity. To address these challenges, we propose a hardware-efficient architecture for optical structured neural networks (OSNNs). Through experimental validation on an FPGA-based photonic-electronic testing platform, our neural chip demonstrates its effectiveness in on-chip convolution operations and image recognition tasks, exhibiting lower active component usage, reduced control complexity, and improved energy efficiency.
The optical neural network (ONN) is a promising platform for implementing deep learning tasks thanks to the critical features of light, such as high parallelism, low latency, and low power consumption. Previous ONN architectures are mainly composed of arrays of single-operand photonic devices, such as Mach-Zehnder interferometers (MZIs) or microring resonators. However, as the size of deep neural networks (DNNs) continues to grow, these ONNs incur unnecessary hardware costs, such as large chip areas and high power consumption.
In this work, we devise several compact customized multi-operand active photonic components for tensor operations, for example, multi-operand ring modulators, to reduce the hardware cost of optical AI accelerators. Furthermore, we propose ONN architectures based on these multi-operand active photonic components. Compared to previous ONNs based on single-operand MZI or microring arrays, our work uses fewer optical and electrical components to implement matrix multiplications with comparable task performance. Finally, we experimentally demonstrate the utility of our proposed ONN architectures based on multi-operand photonic devices in several deep learning tasks.
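As a rough, hypothetical sketch of the multi-operand idea (my own illustration, not the authors' device model), the transfer function of an all-pass microring can be written as a function of its round-trip phase; a multi-operand ring lets several electrodes contribute to that single shared phase, so one active device encodes a weighted sum that would otherwise require one modulator per operand. The `sensitivity` and `phi_bias` parameters below are assumed for illustration:

```python
import numpy as np

def ring_transmission(phi, a=0.9, r=0.9):
    # Standard all-pass microring power transmission vs. round-trip phase phi
    # (a: single-pass amplitude, r: self-coupling coefficient).
    num = a**2 - 2 * r * a * np.cos(phi) + r**2
    den = 1 - 2 * r * a * np.cos(phi) + r**2
    return num / den

def multi_operand_ring(inputs, weights, sensitivity=0.2, phi_bias=np.pi / 2):
    # Hypothetical multi-operand ring: each operand x_i shifts the shared
    # round-trip phase in proportion to its weight, so the weighted sum
    # sum_i w_i * x_i is embedded in a single device's transmission.
    phi = phi_bias + sensitivity * np.dot(weights, inputs)
    return ring_transmission(phi)
```

By contrast, a single-operand microring array needs one resonator (plus its driver) per weight; here N operands share one resonator, which is the hardware saving the abstract describes.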
In the post-Moore era, conventional electronic digital computers have encountered escalating challenges in supporting massively parallel and energy-hungry artificial intelligence (AI) workloads, raising a strong demand for a revolutionary AI computing solution. The optical neural network (ONN) is a promising hardware platform that could represent a paradigm shift toward efficient AI with its ultrafast speed, high parallelism, and low energy consumption. In recent years, efforts have been made to advance the ONN design stack and push forward the practical application of optical neural accelerators.
In this paper, we present a holistic solution with state-of-the-art cross-layer co-design methodologies towards scalable, robust, and self-learnable integrated photonic neural accelerator designs across the circuit, architecture, and algorithm levels.
We will introduce (1) an area-efficient butterfly-style ONN architecture design beyond traditional general tensor units, (2) model-circuit co-optimization that boosts variation-tolerance and endurance of photonic in-memory computing, (3) efficient ONN on-chip training algorithms that enable self-learnable photonic AI engines, and (4) AI-assisted automated photonic integrated circuit (PIC) design methodology beyond manual PIC designs in footprint, expressivity, and noise-tolerance.
Our proposed ONN design stack is integrated into our open-source PyTorch-centric ONN library TorchONN to construct customized photonic AI engine designs and perform high-performance ONN training and optimization.
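For intuition on why a butterfly-style architecture is area-efficient, here is a minimal software sketch (my own illustration, not the TorchONN implementation): an FFT-like cascade of 2x2 rotations realizes an N-point transform with (N/2)·log2(N) angle parameters instead of the N² entries of a general tensor unit, mirroring a mesh of 2x2 photonic couplers:

```python
import numpy as np

def butterfly_transform(x, thetas):
    # thetas: one array of N//2 rotation angles per stage, log2(N) stages.
    # Each stage pairs elements at a fixed stride and mixes each pair with
    # a 2x2 rotation, like one column of 2x2 couplers in a photonic mesh.
    x = np.asarray(x, dtype=float)
    N = x.size
    y = x.copy()
    bs = N  # current butterfly block size, halved each stage
    for theta in thetas:
        half = bs // 2
        for start in range(0, N, bs):
            for k in range(half):
                i, j = start + k, start + k + half
                c = np.cos(theta[start // 2 + k])
                s = np.sin(theta[start // 2 + k])
                y[i], y[j] = c * y[i] - s * y[j], s * y[i] + c * y[j]
        bs //= 2
    return y
```

With all angles zero the transform is the identity, and because every stage is a pairwise rotation, the L2 norm of the input is preserved, a property shared by lossless interferometer meshes.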
Micro-resonator modulators working in the critical-coupling regime are usually sensitive to fabrication variation. We employ two modulated racetrack resonators symmetrically coupled to a waveguide on two sides, fabricated on a silicon-on-insulator wafer. In the strong-coupling regime, the structure performs modulation stably through the interference of the two racetracks. Fabrication variations (30-50%) in the coupling constant or quality factor can be compensated by a static voltage bias applied to the diode-embedded resonators. The detailed working principle and fabrication-variation compensation will be presented. Experimentally, we demonstrate 50-56 Gb/s modulation with a high extinction ratio of 7.9-9.4 dB and a high signal-to-noise ratio of 6-7, while maintaining a low driving voltage (<2.5 Vpp) and small size (tens of microns).
The optical neural network (ONN) is a promising neuromorphic framework for implementing deep learning tasks thanks to the key features of light, such as high parallelism, low latency, and low power consumption. As the size of deep neural networks (DNNs) continues to grow, so do the training and control difficulties of the corresponding photonic hardware accelerators. Therefore, it is essential to reduce the complexity of ONNs while maintaining accuracy. Here we propose an ONN architecture based on structured neural networks to reduce optical component utilization as well as the chip footprint. The model complexity of our proposed ONN can be further optimized by incorporating current DNN pruning strategies. Meanwhile, a hardware-aware on-chip training flow is also proposed to improve the learnability, trainability, and robustness of our architecture. Finally, we experimentally demonstrate the reliability of this architecture with a programmable photonic neural chip and benchmark its performance on multiple datasets.
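As a small illustration of how pruning shrinks the photonic hardware (a generic magnitude-pruning sketch, not the paper's specific strategy), zeroing the smallest-magnitude weights directly removes the optical components that would have implemented them:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    # Generic magnitude pruning: zero out the fraction `sparsity` of weights
    # with the smallest absolute value. In an ONN, each pruned weight is an
    # optical component (and its driver) that no longer needs to be placed.
    W = np.asarray(W, dtype=float)
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return W * (np.abs(W) > thresh)
```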
Deep neural networks (DNNs) have shown their superiority in a variety of complicated machine learning tasks. However, large-scale DNNs are computation- and memory-intensive, and significant efforts have been made to improve the efficiency of DNNs through better hardware accelerators as well as software training algorithms. The optical neural network (ONN) is a promising candidate for a next-generation neurocomputing platform due to its high parallelism, low latency, and low energy consumption. Here, we devise a hardware-efficient optical neural network architecture named the optical subspace neural network (OSNN), which targets lower optical component usage, area cost, and energy consumption than previous ONN architectures while delivering comparable task performance. Additionally, a hardware-aware training framework is provided to minimize the required control precision, lessen the chip area, and boost the noise robustness.
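One common way to express such a structured (subspace) layer in software is a circulant weight matrix; this is my generic illustration of the parameter-count saving, not necessarily the structure the OSNN hardware realizes. A circulant N x N matrix needs only N parameters, and its matrix-vector product reduces to an elementwise multiply in the Fourier domain:

```python
import numpy as np

def circulant_matvec(c, x):
    # y = C @ x for the circulant matrix C whose first column is c.
    # C is fully described by N parameters instead of N**2, and the
    # product is a circular convolution, computed here via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```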
This Conference Presentation, “Wavelength-division-multiplexing-based electronic-photonic integrated circuits for high-performance data processing and transportation,” was recorded for the Photonics West 2021 Digital Forum.
This Conference Presentation, “Scalable fast-Fourier-transform-based (FFT-based) integrated optical neural network for compact and energy-efficient deep learning,” was recorded for the Photonics West 2021 Digital Forum.
This Conference Presentation, “Wavelength-division-multiplexing-based electronic-photonic network for high-speed computing,” was recorded at Photonics West 2020, held in San Francisco, California, United States.
As a tremendous amount of data is created every day and the continuation of Moore's law faces a bottleneck, integrated optical computing has recently attracted significant attention. With the rapid development of micro/nano-scale optical devices, integrated photonics has shown its potential to satisfy the demands of computation with ultracompact size, ultrafast speed, and ultralow power consumption. As one paradigm of optical computing, electro-optic logic, which combines the merits of photonics and electronics, has made considerable progress on various fundamental logic gates. It is therefore critical to develop an automated design method to synthesize these logic devices into large-scale optical computing circuits. In this paper, we propose a new automated logic synthesis algorithm based on And-Inverter Graphs (AIGs) for electro-optic computing. A comprehensive component library of electro-optic logic is summarized, together with several newly proposed logic gates. As an example, a large-scale ripple-carry full adder, which serves as the core of the arithmetic logic unit (ALU), is presented. In the design, all the electrical signals can be applied simultaneously at every clock cycle, and the light can then process the signals through every bit at the speed of light without accumulated delay. High-speed experimental demonstrations are carried out, showing the design's potential for future high-speed, low-power optical computing.
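To make the AIG idea concrete, here is a toy software sketch (my own illustration, not the paper's synthesis flow): every Boolean function is expressed using only two-input ANDs and inverters, and a ripple-carry adder is composed from AIG-style full adders:

```python
def AND(a, b):   # AIG node: two-input AND
    return a & b

def NOT(a):      # AIG edge inversion
    return 1 - a

def OR(a, b):    # OR via De Morgan: a+b = !(!a & !b)
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):   # XOR in AIG form: !( !(a & !b) & !(!a & b) )
    return NOT(AND(NOT(AND(a, NOT(b))), NOT(AND(NOT(a), b))))

def full_adder(a, b, cin):
    p = XOR(a, b)
    return XOR(p, cin), OR(AND(a, b), AND(p, cin))  # (sum, carry-out)

def ripple_carry_add(x_bits, y_bits):
    # LSB-first bit lists; the carry ripples through the full-adder chain.
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]
```

In the electro-optic realization described above, the electrical operands drive all stages simultaneously each clock cycle while light evaluates the cascaded logic, so no per-bit delay accumulates.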