Convolutional Neural Network (CNN) models have demonstrated remarkable success in recent years for many applications, including computer vision. However, the accuracy gains achieved so far come at the cost of memory and complex computation. CNN models are growing deeper and wider, making it difficult to fit them on a single device with limited resources. Moreover, the inference time of such models is too large for real-time, mission-critical applications such as the Internet of Battlefield Things (IoBT), where unmanned aerial vehicles (UAVs) flying over the battlefield and capturing images require accurate learning and immediate inference. This becomes problematic when the learning model does not fit on a single resource-constrained UAV. Considering these issues, in this paper we study a formal approach to improve inference time and memory utilization in resource-constrained IoBT. We consider multiple UAVs participating in the inference process, applying spatially parallel convolution and pooling operations across all convolution and pooling layers, and model parallelism for the fully connected (FC) layers. Finally, we present numerical results for varying numbers of participating UAVs, input data/image sizes, and communication speeds.
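To illustrate the idea of spatially parallel convolution mentioned above, the following is a minimal sketch, not the paper's implementation: the input image is split row-wise into strips, each "UAV" convolves its strip locally, and the partial outputs are concatenated. An overlap ("halo") of kernel_size - 1 rows keeps the stitched result identical to a single-device convolution. Function names such as `split_with_halo` and `parallel_conv` are illustrative assumptions only.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D convolution (single channel), used as the local operation."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def split_with_halo(x, n_devices, kh):
    """Split the rows of x into n_devices strips, each extended by kh-1 halo rows."""
    bounds = np.linspace(0, x.shape[0] - kh + 1, n_devices + 1, dtype=int)
    return [x[bounds[d]:bounds[d + 1] + kh - 1, :] for d in range(n_devices)]

def parallel_conv(x, k, n_devices):
    """Hypothetical spatially parallel convolution: each strip is processed
    independently (e.g., by one UAV) and the partial outputs are stacked."""
    strips = split_with_halo(x, n_devices, k.shape[0])
    partial = [conv2d_valid(s, k) for s in strips]
    return np.vstack(partial)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((32, 32))
    kernel = rng.standard_normal((3, 3))
    # The partitioned result matches the single-device convolution exactly.
    assert np.allclose(parallel_conv(img, kernel, 4), conv2d_valid(img, kernel))
```

In a real multi-UAV deployment, the list comprehension over strips would be replaced by distributed execution, and model parallelism for the FC layers would similarly partition the weight matrix across devices; those details are beyond this sketch.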