On-chip photonic-neural-network processors have potential benefits in both speed and energy efficiency but have not yet reached the scale to compete with electronic processors. The dominant paradigm is to build integrated-photonic processors using relatively bulky discrete components connected by single-mode waveguides. A far more compact alternative is to avoid explicitly defining any components and instead sculpt the continuous substrate of the photonic processor to directly perform the computation using waves freely propagating in two dimensions. In this talk, I will present our recent work [1] on experimentally realizing this approach with a device whose refractive index as a function of space, n(x,z), can be rapidly reprogrammed. This device combines photoconductive gain with the electro-optic effect in a lithium niobate slab waveguide. Using this device, we performed neural-network inference with up to 49-dimensional input vectors in a single pass.
[1] T. Onodera*, M. M. Stein*, et al., arXiv:2402.17750 (2024).
On-chip photonic-neural-network processors promise benefits in both speed and energy efficiency but have not yet reached the scale to compete with electronic processors. The dominant paradigm is to build integrated-photonic processors from discrete components connected by single-mode waveguides. A far more compact alternative is to avoid discrete components and instead sculpt a complex, continuous microphotonic medium in which computations are performed by multimode waves controllably propagating in two dimensions. We present our realization of this approach with a device whose refractive index as a function of space can be rapidly reprogrammed. We demonstrate optical computations much larger and more error-resilient than those of previous photonic chips relying on discrete components. We argue that beyond photonic-neural-network processors, devices with such arbitrarily programmable index distributions enable a wide range of photonic functionality.
We report the realization of an on-chip waveguide platform capable of creating arbitrary two-dimensional refractive-index profiles in situ and in real time. The device exhibits complex multimode dynamics, which we train to perform machine learning: we tune the refractive-index profile in situ using a backpropagation algorithm to perform audio and image classification with up to 50-dimensional inputs. The two-dimensional programmability is realized by sandwiching a photoconductive film and a lithium niobate slab waveguide between two flat electrodes. While a voltage is applied between the electrodes, we program the effective index of the waveguide by projecting different light patterns onto the photoconductive film. The effective index increases by ~10^-3 in illuminated regions via the electro-optic effect, free from any measurable memory effects or cyclic degradation. In conclusion, we have developed a photonics platform with versatile spatial programmability that opens new avenues for optical computing and photonic inverse design.
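The reported index change of ~10^-3 can be sanity-checked against the Pockels effect. A minimal sketch follows, assuming the r33 coefficient of congruent lithium niobate and a field applied along the optic axis; the actual device geometry and coefficients may differ.

```python
# Sanity-check sketch: Pockels-effect index change dn = 0.5 * n_e^3 * r33 * E.
# N_E and R33 are textbook values for lithium niobate, not from the abstract.
N_E = 2.2          # extraordinary refractive index (approximate)
R33 = 30.8e-12     # electro-optic coefficient, m/V (approximate)

def delta_n(e_field):
    """Index change for an applied field e_field in V/m."""
    return 0.5 * N_E**3 * R33 * e_field

# Field that would produce the ~1e-3 index change reported above (~6e6 V/m):
e_required = 1e-3 / (0.5 * N_E**3 * R33)
```

This back-of-the-envelope field is comfortably below the dielectric breakdown of lithium niobate, consistent with the device operating repeatably.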
I will give an overview of our work on analog neural networks based on photonics and other controllable physical systems. I will show how backpropagation can efficiently train physical neural networks (PNNs) and how to design physical network architectures for physics-based machine learning. I will review our work showing how nonlinear photonic neural networks can enhance computational sensing and how photonic neural networks can be operated robustly deep into low-energy regimes where quantum noise would ordinarily be a limiting factor. Finally, I will show that PNNs offer fundamental advantages for scaling AI models such as Transformers.
Photonic neural networks have been developed as a hardware platform to accelerate machine-learning inference. Digital micromirror devices (DMDs) have played a critical role in the development of a variety of photonic neural networks, owing to their ability to manipulate millions of optical spatial modes in a 2D plane at frame rates above 10 kHz. DMDs have not only enabled high-throughput machine-learning inference but have also made hardware-in-the-loop training possible with photonic neural networks. In this talk, we will review the functions of DMDs in a plethora of photonic-neural-network architectures and discuss how MEMS-based technologies can enable novel photonic neural networks in the future.
Utilizing the input-output transformation of ultrafast nonlinear pulse propagation in quadratic media, we experimentally construct a multilayer physical neural network that performs both audio and handwritten-image classification. To train physical neural networks, we introduce physics-aware training, a hybrid in situ-in silico backpropagation algorithm that is resilient to the simulation-reality gap. The methodology for constructing and training physical neural networks applies to generic complex physical systems; to demonstrate its generality, we also built and trained physical neural networks out of analog electronic circuits and multimode mechanical oscillators to perform image classification.
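The hybrid in situ-in silico idea can be sketched in a few lines: the forward pass runs through the physical system, while gradients are estimated with a differentiable digital model evaluated at the physical operating point. Everything below (the toy "physical" system, its mismatch and noise, and the digital model) is a hypothetical stand-in for illustration, not the experimental apparatus described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def physical_forward(x, w):
    # Stand-in for the real hardware: the digital model plus a systematic
    # mismatch term and readout noise (both invented for this sketch).
    pre = w @ x
    return np.tanh(pre) + 0.05 * pre**2 + 0.01 * rng.normal(size=pre.shape)

def model_forward(x, w):
    # Differentiable digital model, used only for the backward pass.
    return np.tanh(w @ x)

def model_grad_w(x, w, grad_out):
    # Gradient of the *model* output w.r.t. the weights.
    pre = w @ x
    return np.outer(grad_out * (1 - np.tanh(pre) ** 2), x)

# One physics-aware training step: physical forward, in-silico backward.
w = rng.normal(size=(2, 4))
x = rng.normal(size=4)
target = np.array([0.3, -0.1])

y_phys = physical_forward(x, w)          # outputs come from the "hardware"
grad_out = 2 * (y_phys - target)         # loss gradient at the physical output
w -= 0.1 * model_grad_w(x, w, grad_out)  # update uses the model's Jacobian
```

Because the error signal is measured on the physical output rather than the simulated one, the update remains useful even when the model is imperfect, which is the essence of the reported resilience to the simulation-reality gap.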
Linear optics has long been applied to image compression. However, it is widely known that nonlinear neural networks outperform linear models at feature extraction and image compression. Here, we demonstrate a nonlinear multilayer optical neural network (ONN) that uses a commercially available image intensifier as a scalable optical-to-optical nonlinear activation function. We experimentally show that nonlinear ONNs outperform linear optical encoders on a variety of non-trivial machine-vision tasks at high image-compression ratios (up to 800:1). We also show that nonlinear ONNs can directly process optical inputs from physical objects under natural illumination, providing a new pathway toward high-volume, high-throughput, and low-latency machine-vision processing.
In conventional approaches to computer-vision tasks such as object recognition, a camera digitally records a high-resolution image and an algorithm is run to extract information from the image. Alternative image-sensing schemes have been proposed that extract high-level features from a scene using optical components, filtering out irrelevant information ahead of conversion from the optical to the electronic domain by an array of detectors (e.g., a CMOS image sensor). In this way, images are compressed into a low-dimensional latent space, allowing computer-vision applications to be realized with fewer detectors, fewer photons, and reduced digital post-processing, which enables low-latency processing. Optical neural networks (ONNs) offer a powerful platform for such image compression and feature extraction in the analog, optical domain. While ONNs have been successfully implemented using only linear operations, which can still be leveraged for computer-vision applications, it is well known that adding nonlinearity (a prerequisite for depth) enables neural networks to perform more complex processing. Our work realizes a multilayer ONN preprocessor for image sensing, using a commercial image intensifier as an optoelectronic, optical-to-optical nonlinear activation function. The nonlinear ONN preprocessor achieves compression ratios up to 800:1. At high compression ratios, the nonlinear ONN outperforms any linear preprocessor in terms of classification accuracy on a variety of tasks. Our experiments demonstrate ONN image sensors with incoherent light, but emerging technologies such as metasurfaces, large-scale laser arrays, and novel optoelectronic materials will provide the means to realize a variety of multilayer ONN preprocessors that act on coherent and/or quantum light.
Drosophila is an important model animal for connectomics, since its brain is complex yet small enough to be mapped by optical microscopy with single-cell resolution. Compared to other model animals, its genetic toolbox is more sophisticated, and a connectome map with single-cell resolution has been established, serving as an invaluable reference for functional-connectome studies. Two-photon microscopy (2PM) is now the most popular tool for studying the functional connectome, taking advantage of low photobleaching, subcellular resolution, and deep penetration. However, with GFP labeling and ~920 nm excitation, the reported penetration depths in a living Drosophila brain are limited to ~100 μm, much smaller than in living mouse or zebrafish brains. The underlying reason is that air-filled vessels, i.e., trachea, rather than blood vessels, are responsible for oxygen exchange in the Drosophila brain. The tracheal structures induce extraordinarily strong scattering and aberration, since the air/tissue refractive-index difference is much larger than that of blood/tissue. By expelling the air inside the trachea, the whole Drosophila brain can be penetrated by 2PM without difficulty; however, the fly does not survive the procedure. Here, three-photon microscopy based on a 1300 nm laser is demonstrated to penetrate a living Drosophila brain with single-cell resolution. The long wavelength intrinsically reduces scattering, and, combined with the normal dispersion of brain tissue, it also reduces the aberration from the trachea/tissue interface to some extent. As a result, the penetration depth is improved by more than a factor of two with 1300 nm excitation. We believe this technique will contribute significantly to functional-connectome studies in the future.
We demonstrate three-photon microscopy (3PM) of the mouse cerebellum at 1 mm depth by imaging both blood vessels and neurons. We compared 3PM and 2PM in the mouse cerebellum for imaging green fluorescence (using excitation sources at 1300 nm and 920 nm, respectively) and red fluorescence (using excitation sources at 1680 nm and 1064 nm, respectively). 3PM enabled deeper imaging than 2PM because the longer excitation wavelength reduces scattering in biological tissue and the higher-order nonlinear excitation provides better 3D localization. To quantify these two advantages, we measured the signal decay as well as the signal-to-background ratio (SBR) as a function of depth. We performed 2PM imaging from the brain surface down to the depth where the SBR reaches ~1; at the same depth, 3PM still has an SBR above 30. The segmented decay curve shows that the mouse cerebellum has different effective attenuation lengths at different depths, indicating heterogeneous tissue properties in this brain region. We compared the third-harmonic-generation (THG) signal, which is used to visualize myelinated fibers, with the decay curve, and found that the regions with shorter effective attenuation lengths correspond to the regions with more fibers. Our results indicate that the widespread, non-uniformly distributed myelinated fibers add heterogeneity to the mouse cerebellum, posing additional challenges for deep imaging of this brain region.
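The attenuation-length analysis can be illustrated with a short sketch: for n-photon excitation, the signal decays roughly as S(z) ~ exp(-n z / l_e), so l_e follows from a log-linear fit within each depth segment. The decay model and the l_e value below are illustrative assumptions, not the measured data.

```python
import numpy as np

def fit_attenuation_length(z_um, signal, n_photon=3):
    # Log-linear fit of S(z) ~ exp(-n z / l_e); slope = -n / l_e.
    slope, _ = np.polyfit(z_um, np.log(signal), 1)
    return -n_photon / slope  # l_e in the same units as z

# Synthetic three-photon decay with l_e = 300 um (hypothetical value):
z = np.linspace(0, 900, 10)
s = np.exp(-3 * z / 300.0)
l_e = fit_attenuation_length(z, s)  # recovers ~300 um
```

A segmented analysis, as described above, would simply apply this fit separately to each depth range and compare the resulting l_e values against the THG fiber map.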
Rett syndrome (RTT) is a pervasive, X-linked neurodevelopmental disorder that predominantly affects girls. It is mostly caused by a sporadic mutation in the gene encoding methyl-CpG-binding protein 2 (MeCP2). The clinical features of RTT are most commonly reported to emerge between the ages of 6 and 18 months, implicating RTT as a disorder of postnatal development. However, a variety of recent evidence from our lab and others demonstrates that RTT phenotypes are present at the earliest stages of brain development, including neurogenesis, migration, and patterning, in addition to stages of synaptic and circuit development and plasticity. We have used RTT patient-derived induced pluripotent stem cells to generate 3D human cerebral organoids that can serve as a model for human neurogenesis in vitro. We aim to expand on our existing findings to determine aberrancies at individual stages of neurogenesis by performing structural and immunocytochemical staining in isogenic control and MeCP2-deficient organoids. In addition, we aim to use third-harmonic-generation (THG) microscopy as a label-free, nondestructive 3D tissue-visualization method to gain a complete understanding of the structural complexity that underlies human neurogenesis.
As a proof of concept, we have performed THG imaging in healthy intact human cerebral organoids cleared with SWITCH. We acquired an intrinsic THG signal with the following laser configuration: 400 kHz repetition rate, 65 fs pulse width, and 1350 nm wavelength. In these THG images, nuclei are clearly delineated, and cross-sections demonstrate a depth-penetration capacity (<1 mm) that extends throughout the organoid. Imaging control and MeCP2-deficient human cerebral organoids in 2D sections reveals structural and protein-expression alterations that we expect will be clearly elucidated via both THG and three-photon fluorescence microscopy.
We demonstrate that three-photon microscopy (3PM) with 1300 nm excitation enables functional imaging of GCaMP6s-labeled neurons beyond the depth limit of two-photon microscopy (2PM) with 920 nm excitation. We quantitatively compared 2PM and 3PM imaging of the calcium indicator GCaMP6s by measuring the correlation between activity traces, the absolute signal level, the excitation attenuation with depth, and the signal-to-background ratio (SBR). Compared to 2PM imaging of GCaMP6s-labeled neurons, 3PM imaging has increasingly large advantages in signal strength and SBR as the imaging depth increases in densely labeled mouse brain, given the same pulse energy, pulse width, and repetition rate at the sample surface. For example, around 700-800 μm in mouse cortex, 3PM has signal strength comparable to 2PM and up to two orders of magnitude higher SBR. We also demonstrate 3PM activity recording of 150 neurons in the hippocampal stratum pyramidale (SP) at 1 mm depth, which is inaccessible to non-invasive 2PM imaging. Our work establishes 3PM as a powerful tool for calcium imaging at depths beyond the 2PM limit.
Multiphoton fluorescence microscopy is a well-established technique for deep-tissue imaging with subcellular resolution. Three-photon microscopy (3PM) combined with long-wavelength excitation has been shown to allow deeper imaging than two-photon microscopy (2PM) in biological tissues, such as the mouse brain, because out-of-focus background light is further reduced by the higher-order nonlinear excitation. As was demonstrated in 2PM systems, imaging depth and resolution can be improved by aberration correction using adaptive-optics (AO) techniques based on shaping the scanning beam with a spatial light modulator (SLM). In this way, it is possible to compensate for low-order tissue aberration and, to some extent, for tissue scattering. Here, we present a 3PM AO microscopy system for brain imaging. Soliton self-frequency shift is used to create a femtosecond source at 1675 nm, and a microelectromechanical-systems (MEMS) SLM serves as the wavefront-shaping device. We perturb the 1020-segment SLM using a modified nonlinear version of three-point phase-shifting interferometry. The nonlinearity of the fluorescence signal used for feedback ensures that the signal increases when the spot size decreases, allowing compensation of phase errors in an iterative optimization process without direct phase measurement. We compare the performance for different orders of nonlinear feedback, showing an exponential growth in signal improvement as the nonlinear order increases. We demonstrate the impact of the method by applying the 3PM AO system to in vivo mouse brain imaging, showing improvement in signal at 1 mm depth inside the brain.
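The segment-wise optimization with three-point phase stepping can be sketched as follows. Each SLM segment is probed at three test phases, the phase of the (assumed cosinusoidal) response is recovered in closed form, and the segment is set to the maximizing value; with a nonlinear signal the estimate is approximate, so the loop is iterated. The segment count, aberration, and focal-intensity model below are illustrative stand-ins, not the 1020-segment MEMS system described above.

```python
import numpy as np

rng = np.random.default_rng(1)

N_SEG, ORDER = 32, 3                           # toy segment count, cubic (3PM) signal
aberration = rng.uniform(-np.pi, np.pi, N_SEG)  # unknown tissue aberration (synthetic)
phases = np.zeros(N_SEG)                        # SLM correction phases

def signal(ph):
    # Focal intensity |sum of segment fields|^2 raised to the nonlinear order;
    # maximizing it shrinks the focal spot without direct phase measurement.
    field = np.exp(1j * (aberration + ph)).sum()
    return np.abs(field) ** (2 * ORDER)

steps = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
for _ in range(2):                              # iterate to refine the estimates
    for k in range(N_SEG):
        meas = []
        for s in steps:                         # probe segment k at three phases
            trial = phases.copy()
            trial[k] = s
            meas.append(signal(trial))
        I0, I1, I2 = meas
        # Three-point phase recovery: exact for a pure cosine response,
        # approximate (hence the outer iteration) for the nonlinear signal.
        theta = np.arctan2(np.sqrt(3) * (I2 - I1), 2 * I0 - I1 - I2)
        phases[k] = -theta                      # phase that maximizes the response
```

After the sweep, signal(phases) is far larger than with the uncorrected wavefront, mirroring the iterative signal growth used as feedback in the experiment.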