KEYWORDS: Data storage, Computing systems, Design and modelling, Control systems, Convolution, Field programmable gate arrays, Reconfigurable computing, Data modeling, Convolutional neural networks, Neural networks
In this paper, we propose a reconfigurable framework, optimized for resource-constrained platforms, that accelerates CNNs by exploiting the high concurrency and data-proximate characteristics of edge computing devices. The framework is designed from three aspects: control flow, data flow, and storage flow. To mitigate the impact of memory access cost on network computation efficiency, we introduce a parallel ping-pong data scheduling scheme. Experimental results show that the system supports convolution kernels of any size within its structural constraints, and that computational efficiency improves by a factor of 2.28 over CNN acceleration systems based on traditional data scheduling schemes.