Reverse-mode automatic differentiation (RMAD) is widely used in deep-learning training because its runtime is independent of the number of trainable parameters. However, RMAD is memory-intensive: it stores every intermediate value and operation of the forward pass, which makes it incompatible with the time-stepping finite-difference time-domain (FDTD) electromagnetic simulators in common use. To address this limitation, a differentiable FDTD simulator is proposed that exploits the time-reversal symmetry of Maxwell's equations and removes redundant operations at each timestep, eliminating the memory bottleneck. This approach enables the efficient computation of gradients of high-dimensional objective functions, broadening the applicability of inverse-design topology optimization.