Currently, when performing pixel-level semantic segmentation of crop planting in high-resolution images, it is difficult for deep convolutional neural networks to capture global features and local detail features at multiple spatial scales simultaneously. This can lead to blurred boundary contours between different farmland plots and poor integrity within homogeneous farmland areas. To address these problems, a model that integrates a Transformer and a CNN is proposed to classify and identify crops in the study area, using UAV remote sensing images from a public competition as the data source. (1) The model adopts a multi-level skip-connected encoder-decoder architecture. The encoder uses an improved MobileNetV2 as the front-end feature extractor to capture local detail features, and the extracted features are then fed into a Vision Transformer for global feature capture and further processing, so that fine details are preserved while global context information is retained. (2) The decoder follows the UNet design: features from different levels of the encoder are passed directly to the corresponding decoder levels through skip connections to preserve detail information, and upsampling layers gradually restore the spatial resolution of the image. On the public competition dataset, the proposed network reaches an MIoU of 85.96%, a PA of 92.30%, and a Dice value of 0.922, the highest segmentation accuracy among the compared networks.
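A minimal PyTorch sketch of the hybrid encoder-decoder described above follows. The class names (TransformerBottleneck, HybridSegNet), the MobileNetV2 split points, the number of Transformer layers, and all channel sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: MobileNetV2 local features -> Transformer global context -> UNet-style decoder.
import torch
import torch.nn as nn
import torchvision


class TransformerBottleneck(nn.Module):
    """Flatten the CNN feature map into tokens and apply ViT-style self-attention."""

    def __init__(self, channels, num_layers=2, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
        tokens = self.encoder(tokens)               # global self-attention over all positions
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class HybridSegNet(nn.Module):
    """CNN front end for local detail, Transformer bottleneck for global context,
    UNet-style decoder with skip connections (layer sizes are assumptions)."""

    def __init__(self, num_classes=4):
        super().__init__()
        backbone = torchvision.models.mobilenet_v2(weights=None).features
        # Split the backbone so intermediate maps can feed skip connections.
        self.stage1 = backbone[:4]    # 1/4 resolution, 24 channels
        self.stage2 = backbone[4:7]   # 1/8 resolution, 32 channels
        self.stage3 = backbone[7:14]  # 1/16 resolution, 96 channels
        self.bottleneck = TransformerBottleneck(96)
        self.up1 = nn.ConvTranspose2d(96, 32, 2, stride=2)
        self.dec1 = nn.Conv2d(64, 32, 3, padding=1)
        self.up2 = nn.ConvTranspose2d(32, 24, 2, stride=2)
        self.dec2 = nn.Conv2d(48, 24, 3, padding=1)
        self.head = nn.Conv2d(24, num_classes, 1)

    def forward(self, x):
        s1 = self.stage1(x)
        s2 = self.stage2(s1)
        s3 = self.stage3(s2)
        g = self.bottleneck(s3)
        d1 = torch.relu(self.dec1(torch.cat([self.up1(g), s2], dim=1)))
        d2 = torch.relu(self.dec2(torch.cat([self.up2(d1), s1], dim=1)))
        logits = self.head(d2)
        # Restore the input resolution (stage1 already downsampled by 4x).
        return nn.functional.interpolate(logits, scale_factor=4, mode="bilinear", align_corners=False)


# Usage: HybridSegNet(num_classes=4)(torch.randn(1, 3, 256, 256)) -> (1, 4, 256, 256) logits.
```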
Liver cancer is a common disease, and automatic, accurate segmentation of the liver and its tumors is of great significance in clinical medicine. To address the blurred boundaries and difficult feature extraction in CT images of the liver and its tumors, an improved medical segmentation model, SKE-Unet++, based on Unet++ is proposed. The SKE-Unet++ model adopts full-scale feature fusion connections and integrates coarse-grained and fine-grained semantic information extracted at all scales. To emphasize the more important features after convolution, an SE (Squeeze-and-Excitation) module is added to enhance the channel features of the input feature map. Because the foreground in a CT image is much smaller than the background, the cross-entropy loss and the Dice loss are combined to address the class imbalance. Compared with the baseline Unet++ model, the Dice, Jaccard, and ASSD metrics of SKE-Unet++ on the liver segmentation task of the LiTS dataset improve by 1.03%, 0.29%, and 0.2897 mm respectively, and the three metrics on the liver tumor segmentation task improve by 7.54%, 9.33%, and 1.2077 mm respectively, which confirms the validity of the model and provides a reference for automatic segmentation of liver cancer medical images.
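The two components highlighted above, SE channel attention and a combined cross-entropy/Dice loss, can be sketched in PyTorch as follows. The reduction ratio of 16, the 0.5/0.5 loss weighting, and the function names are assumptions made for illustration, not the paper's exact settings.

```python
# Sketch only: SE channel-attention block and a combined CE + Dice loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Re-weight feature-map channels by their globally pooled importance."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)     # excite: per-channel scaling


def combined_ce_dice_loss(logits, target, ce_weight=0.5, eps=1e-6):
    """logits: (B, C, H, W); target: (B, H, W) long class indices.
    Cross-entropy handles per-pixel classification; Dice counteracts the
    foreground/background imbalance typical of liver-tumor CT slices."""
    ce = F.cross_entropy(logits, target)
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return ce_weight * ce + (1 - ce_weight) * (1 - dice.mean())
```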
The height of Tetraena mongolica Maxim is a marker of its growth rate and an important index for analyzing its growth condition. In this study, the Tetraena mongolica in a 50 m × 50 m sample plot was taken as the research object, and images captured by a UAV-mounted camera were combined with 3D point cloud data acquired by a ground-based radar, fusing the 2D UAV data with the 3D ground-based data. Unsupervised classification and supervised classification using support vector machine, random forest, and maximum likelihood methods were performed on the imagery, plant height was obtained by kriging interpolation of the point cloud data, and the image data and point cloud data were then fused. Using this method, a maximum plant height of 0.504 m and a vegetation coverage of 13.11% were obtained, providing a new research method for the effective protection of Tetraena mongolica.
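A rough sketch of two of the computational steps named above, supervised classification of the UAV pixels (here with a random forest) and kriging interpolation of the point-cloud heights, might look like the following. The placeholder arrays, plot grid, band count, and the use of scikit-learn and PyKrige are assumptions; the SVM and maximum-likelihood classifiers and the image/point-cloud fusion step are not shown.

```python
# Sketch only: random-forest classification of UAV pixels and kriging of point heights.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from pykrige.ok import OrdinaryKriging

# --- Supervised classification of UAV imagery (pixels x spectral bands) ---
train_pixels = np.random.rand(500, 4)          # labelled spectra (placeholder data)
train_labels = np.random.randint(0, 2, 500)    # 1 = Tetraena, 0 = background
image = np.random.rand(1000, 1000, 4)          # UAV orthomosaic (placeholder data)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train_pixels, train_labels)
veg_mask = clf.predict(image.reshape(-1, 4)).reshape(1000, 1000)
coverage = veg_mask.mean() * 100               # vegetation coverage in percent

# --- Kriging interpolation of measured point heights onto the 50 m x 50 m plot ---
pts = np.random.rand(300, 3) * [50, 50, 0.5]   # x, y (m) and plant height (m), placeholder
grid_x = np.linspace(0, 50, 100)
grid_y = np.linspace(0, 50, 100)
ok = OrdinaryKriging(pts[:, 0], pts[:, 1], pts[:, 2], variogram_model="spherical")
height_surface, variance = ok.execute("grid", grid_x, grid_y)
print(f"coverage={coverage:.2f}%, max height={height_surface.max():.3f} m")
```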