Recent years have seen the increasing application of deep learning methods in medical image formation, processing, and analysis. These methods take advantage of the flexibility of nonlinear neural network models to encode information and features in ways that can outperform conventional approaches. However, because of the nonlinear nature of this processing, images formed by neural networks have properties that are highly data-dependent and difficult to analyze. In particular, the generalizability and robustness of these approaches can be difficult to ascertain. In this work, we analyze a class of neural networks that use only piecewise-linear activation functions. This class of networks can be represented by locally linear systems whose linear properties are highly data-dependent, allowing, for example, estimation of noise in the image output via standard propagation methods. We propose a nonlinearity index metric that quantifies the fidelity of a local linear approximation of trained models for specific input data. We applied this analysis to three example CT denoising CNNs to analytically predict the noise properties in the output images. We found that the proposed nonlinearity metric correlates strongly with the accuracy of the noise predictions. The analysis proposed in this work provides theoretical understanding of the nonlinear behavior of neural networks and enables performance prediction and quantification under certain conditions.
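The local-linearity idea at the core of the abstract can be illustrated with a minimal sketch: a network built only from piecewise-linear activations (here ReLU) is exactly linear within the region defined by its activation pattern at a given input, so output noise can be propagated through the data-dependent local linear system. The two-layer network, weights, and noise covariance below are illustrative assumptions, not the models or data from the paper.

```python
import numpy as np

# Hypothetical 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.normal(size=4)

# Local linear system at x: zero out rows of W1 for inactive ReLU units
# (the activation pattern), then compose the layers.
mask = (W1 @ x + b1 > 0).astype(float)   # activation pattern at x
A = W2 @ (mask[:, None] * W1)            # data-dependent local Jacobian
c = f(x) - A @ x                         # local offset

# Within the same activation region the linear model is exact, not approximate.
dx = 1e-6 * rng.normal(size=4)
assert np.allclose(f(x + dx), A @ (x + dx) + c)

# Noise propagation: for input noise n ~ N(0, Sigma) small enough that x + n
# stays in-region, the output covariance follows the standard linear rule.
Sigma = 0.01 * np.eye(4)
Sigma_out = A @ Sigma @ A.T
print(Sigma_out.shape)  # (3, 3)
```

A nonlinearity index, as described in the abstract, would then measure how far the true network output departs from this local linear model over the inputs of interest.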