Due to the long imaging distance and relatively harsh imaging conditions, the spatial resolution of remote sensing data is relatively low. Image/video super-resolution is therefore of great significance for improving the spatial resolution and visual quality of remote sensing data. In this paper, we propose a deep-learning-based video super-resolution method for the Jilin-1 remote sensing satellite. We use an explicit motion compensation method: an optical flow estimation network computes the optical flow between frames, and a warp operation compensates for the motion of each image. After obtaining the motion-compensated multi-frame images, the frames must be fused for super-resolution reconstruction. We performed super-resolution experiments with a scale factor of 4 on the Jilin-1 video dataset. To find a suitable fusion method, we compared two image fusion strategies in the super-resolution network, channel-wise concatenation and 3D convolution, without motion compensation. Experimental results show that 3D convolution achieves better super-resolution performance, and that the video super-resolution result surpasses the compared single-image super-resolution method. We also performed experiments with motion compensation by the optical flow estimation network. The results show that the difference between the motion-compensated image and the reference frame becomes smaller, indicating that explicit motion compensation can, to a certain extent, compensate for inter-frame differences caused by target motion.
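The warp operation at the heart of explicit motion compensation can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes a dense flow field of shape (H, W, 2) (the paper's flow comes from an optical flow estimation network) and performs backward warping with bilinear sampling, clamping samples that fall outside the image to the border.

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp a neighbor frame toward the reference frame using a
    dense optical flow field of shape (H, W, 2), with bilinear sampling.
    Sample positions outside the image are clamped to the border."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # For each output pixel, sample the neighbor frame at position + flow.
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = sx - x0, sy - y0
    # Bilinear interpolation of the four neighboring samples.
    return ((1 - wy) * ((1 - wx) * frame[y0, x0] + wx * frame[y0, x1])
            + wy * ((1 - wx) * frame[y1, x0] + wx * frame[y1, x1]))
```

After warping each neighbor frame with its estimated flow, the resulting frame stack is aligned to the reference frame and can be fed to the fusion stage (channel-wise concatenation or 3D convolution).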
Crop classification is a representative problem in multispectral remote sensing image (RSI) classification and is significant for national food security, ecological security, yield estimation, crop growth monitoring, and more. It has attracted increasing attention from researchers around the world, especially since the development of convolutional neural networks (CNNs). General CNN-based multispectral RSI classification methods may not be suitable when labeled samples are limited in both number and area, while pixel-based classification methods are often affected by noise and ignore spatial information. Addressing these problems, this paper presents an approach based on a lightened CNN for crop classification with a small number of tiny labeled samples in multispectral images. The contribution of this work is a lightened CNN model for crop classification with small samples that avoids the overfitting of deep CNNs and reduces the required training-sample size. We adopt a two-layer fully convolutional network (FCN) to extract features: the first layer uses a convolutional kernel of size 1 and outputs a 16-band feature map to capture spectral band information; the second layer extracts spatial information with a kernel of size 3, stride 1, and padding 1, so the feature map after the FCN has the same size as the labeled area. Finally, we use a fully connected layer and a softmax classifier for classification. Our experiment was conducted on an 8-band multispectral image of 50362-by-17810 pixels containing 5 classes, namely rice, soy, corn, non-crop, and uncertainty. The experimental result, 86.28% accuracy, indicates the good performance of our network for crop classification in multispectral RSIs.
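The lightened architecture described above (1×1 spectral convolution to 16 bands, then 3×3 spatial convolution with stride 1 and padding 1, then a per-pixel classifier with softmax) can be sketched in plain NumPy. This is an illustrative forward pass only, with random placeholder weights rather than the paper's trained parameters; the final fully connected layer is realized as a 1×1 convolution, which is equivalent when applied per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, pad):
    """Naive stride-1 2-D convolution (cross-correlation, as in CNNs).
    x: (C_in, H, W), w: (C_out, C_in, k, k), zero padding `pad`."""
    c_out, c_in, k, _ = w.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, wd = x.shape[1:]
    out = np.zeros((c_out, h, wd))
    for i in range(k):
        for j in range(k):
            patch = xp[:, i:i + h, j:j + wd]              # (C_in, H, W)
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], patch)
    return out

def lightened_fcn(x, n_classes=5):
    """Forward pass of the two-layer FCN + per-pixel softmax classifier.
    Weights are random placeholders for illustration, not trained values."""
    c_in = x.shape[0]
    w1 = rng.normal(0, 0.1, (16, c_in, 1, 1))        # 1x1 conv: spectral mixing
    w2 = rng.normal(0, 0.1, (16, 16, 3, 3))          # 3x3 conv, pad 1: spatial context
    w3 = rng.normal(0, 0.1, (n_classes, 16, 1, 1))   # per-pixel classifier
    h1 = np.maximum(conv2d(x, w1, 0), 0)             # ReLU
    h2 = np.maximum(conv2d(h1, w2, 1), 0)            # same H, W as input
    logits = conv2d(h2, w3, 0)
    e = np.exp(logits - logits.max(0, keepdims=True))
    return e / e.sum(0, keepdims=True)               # (n_classes, H, W)
```

Because both layers preserve spatial size (the 3×3 layer through padding 1), the class-probability map has exactly the dimensions of the labeled area, which is what lets tiny labeled patches supervise the network directly.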