The optical neural network (ONN) is a promising neuromorphic framework for implementing deep learning tasks thanks to the key features of light, such as high parallelism, low latency, and low power consumption. As the size of deep neural networks (DNNs) continues to grow, so do the training and control difficulties of the corresponding photonic hardware accelerators. It is therefore essential to reduce the complexity of ONNs while maintaining accuracy. Here we propose an ONN architecture based on structured neural networks that reduces both optical component usage and chip footprint. The model complexity of our proposed ONN can be further optimized by incorporating existing DNN pruning strategies. In addition, we propose a hardware-aware on-chip training flow to improve the learnability, trainability, and robustness of the architecture. Finally, we experimentally demonstrate the reliability of this architecture with a programmable photonic neural chip and benchmark its performance on multiple datasets.