Non-uniform motion blur, such as that induced by atmospheric turbulence, can be approximated as a superposition of locally linear uniform blur kernels. A linear uniform blur kernel is modeled with two parameters: length and angle. In recent work, we demonstrated the use of a regression-based Convolutional Neural Network (CNN) for robust blind estimation of the length and angle parameters of linear uniform blur kernels. In this work, we extend the regression-based CNN approach to analyze image patches and estimate the parameters of a locally linear motion blur kernel, allowing us to model the spatially varying blur field. We analyze the effectiveness of this patch-based approach as a function of patch size for two problems: synthetic images generated as a superposition of locally linear blurs, and synthetic images generated with a Zernike polynomial-based wavefront distortion applied at the pupil plane.
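To make the two-parameter kernel model concrete, the following is a minimal sketch (not the paper's implementation) of how a linear uniform blur kernel parameterized by length and angle might be constructed with NumPy; the function name and sampling strategy are illustrative assumptions:

```python
import numpy as np

def linear_blur_kernel(length, angle_deg, size=None):
    """Illustrative sketch: build a normalized linear motion-blur kernel
    from the two parameters in the abstract (length in pixels, angle in degrees).
    This is an assumed construction, not the authors' implementation."""
    if size is None:
        size = int(np.ceil(length)) | 1  # force an odd size so the kernel is centered
    kernel = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Accumulate samples along a line segment of the given length through the center.
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, max(int(length) * 4, 2)):
        row = int(round(c - t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        if 0 <= row < size and 0 <= col < size:
            kernel[row, col] += 1.0
    return kernel / kernel.sum()  # normalize so convolution preserves mean intensity
```

A non-uniform blur field could then be approximated by applying such kernels patchwise, with each patch's (length, angle) pair estimated by the regression CNN.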