Dropout has proven effective at controlling overfitting and improving the generalization of deep networks. However, conventional dropout trains the parameters of every layer with a single constant rate, which limits classification accuracy and efficiency. To address this problem, this paper proposes a dropout rate distribution model derived from the relationship between the dropout rate and the layers of a deep network. First, we give a formal description of the dropout rate that reveals its relationship to the network's layers. Second, we propose a distribution model for determining the dropout rate used when training each layer. Experiments on the MNIST and CIFAR-10 datasets evaluate the proposed model against networks trained with constant dropout rates. The results demonstrate that the proposed model outperforms conventional dropout in both classification accuracy and efficiency.
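The abstract does not specify the form of the distribution model, so the sketch below only illustrates the general idea of replacing a single constant dropout rate with a layer-dependent one. The linearly increasing schedule in `layer_dropout_rates`, the layer sizes, and the class name are all illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

def layer_dropout_rates(num_layers, p_min=0.1, p_max=0.5):
    """Hypothetical per-layer dropout schedule: rates grow linearly with depth.
    This stands in for the paper's (unspecified) distribution model and only
    shows how a layer-dependent rate replaces a single constant rate."""
    if num_layers == 1:
        return [p_max]
    step = (p_max - p_min) / (num_layers - 1)
    return [p_min + i * step for i in range(num_layers)]

class MLPWithLayerwiseDropout(nn.Module):
    """Small MLP where each hidden layer gets its own dropout rate."""
    def __init__(self, sizes=(784, 512, 256, 10)):
        super().__init__()
        hidden = len(sizes) - 2                 # number of hidden layers
        rates = layer_dropout_rates(hidden)     # one rate per hidden layer
        layers = []
        for i in range(hidden):
            layers += [nn.Linear(sizes[i], sizes[i + 1]),
                       nn.ReLU(),
                       nn.Dropout(p=rates[i])]  # layer-dependent rate, not a constant
        layers.append(nn.Linear(sizes[-2], sizes[-1]))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x.flatten(1))

# Example with MNIST-sized inputs (28x28), as in the paper's experiments.
model = MLPWithLayerwiseDropout()
print(model(torch.randn(4, 1, 28, 28)).shape)   # torch.Size([4, 10])
```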