Precipitation forecasting has always been a challenging problem. Meteorological radar echo data are now widely used in precipitation forecasting, since the current precipitation situation can be read directly from radar echo images. Compared with the actual image, however, extrapolated radar echo images suffer from echo loss and inaccurate echo tracks, which lowers the accuracy of the precipitation forecast. A more effective network for precipitation forecasting, built from the New-RainNet module, is proposed. The network is constructed by stacking New-RainNet modules, each formed by adding a convolutional block attention module (CBAM) and a flexible switch from the Adam to the SGD optimization algorithm to the original convolutional long short-term memory (ConvLSTM) network. Short-term and impending precipitation forecasting for the next 0 to 60 min is realized on the radar echo dataset of the Hong Kong Observatory with satisfactory results. Experiments show that the network surpasses other methods in both convergence and prediction accuracy. In addition, the radar echo images generated by the network have better visual quality: they retain image details and are closer to the real radar echo images than those produced by other methods.
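A minimal sketch of the described module composition, assuming a PyTorch implementation: a ConvLSTM cell whose hidden state is reweighted by a CBAM-style attention block. All class names, kernel sizes, and the reduction ratio are illustrative assumptions rather than the authors' code, and the flexible Adam-to-SGD switch would live in the training loop, so it is only noted in a comment.

```python
# Illustrative sketch only; names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class CBAM(nn.Module):
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
                                 nn.Conv2d(ch // reduction, ch, 1))
        self.spatial = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        # channel attention from average- and max-pooled descriptors
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # spatial attention from channel-wise mean and max maps
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

class NewRainNetModule(nn.Module):
    """Hypothetical composition: ConvLSTM memory followed by CBAM attention.
    The flexible switch from Adam to SGD belongs to the training loop."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.cell = ConvLSTMCell(in_ch, hid_ch)
        self.attn = CBAM(hid_ch)

    def forward(self, x, state):
        h, c = self.cell(x, state)
        return self.attn(h), (h, c)
```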
Persons captured in real-life scenarios generally appear at non-uniform scales. However, most widely acknowledged person re-identification (Re-ID) methods emphasize matching normal-scale high-resolution person images. To address this problem, ideas from existing image reconstruction techniques are incorporated, which are expected to help recover accurate appearance information for low-resolution person Re-ID. Specifically, this paper proposes a joint deep learning approach for Scale-Adaptive person Super-Resolution and Re-identification (SASR2). This is the first time that scale-adaptive learning is jointly implemented for super-resolution and re-identification without any extra post-processing. With the super-resolution module, high-resolution appearance information can be automatically reconstructed from low-resolution person images of various scales, which directly benefits the subsequent Re-ID thanks to the joint learning nature of the proposed approach. It is worth noting that SASR2 is not only simple but also flexible, since it adapts to person Re-ID on both multi-scale low-resolution and normal-scale high-resolution datasets. Extensive experimental analysis demonstrates that SASR2 achieves competitive performance compared with previous low-resolution Re-ID methods, especially on the realistic CAVIAR dataset.
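A minimal sketch of the joint-learning idea, assuming PyTorch: a super-resolution module feeds its reconstruction directly into a Re-ID backbone, and both are optimized with one combined loss. Here `sr_net`, `reid_net`, and the weighting `alpha` are hypothetical placeholders, not the SASR2 architecture itself.

```python
# Illustrative sketch only; sr_net, reid_net, and alpha are assumed placeholders.
import torch.nn as nn
import torch.nn.functional as F

class JointSRReID(nn.Module):
    def __init__(self, sr_net, reid_net):
        super().__init__()
        self.sr_net = sr_net      # any image-to-image super-resolution module
        self.reid_net = reid_net  # any Re-ID backbone returning identity logits

    def forward(self, lr_img):
        hr_hat = self.sr_net(lr_img)    # reconstruct HR appearance from LR input
        logits = self.reid_net(hr_hat)  # identify on the reconstructed image
        return hr_hat, logits

def joint_loss(hr_hat, hr_gt, logits, labels, alpha=1.0):
    # pixel-level reconstruction loss plus identity classification loss,
    # so the SR module is trained jointly with the Re-ID objective
    return F.l1_loss(hr_hat, hr_gt) + alpha * F.cross_entropy(logits, labels)
```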
To reduce artifacts and improve image quality in sparse-view CT reconstruction, a novel improved GoogLeNet is proposed. This paper applies residual learning to GoogLeNet to learn the artifacts of sparse-view CT reconstruction, subtracts the learned artifacts from the sparse-view reconstructed images, and finally recovers a clear corrected image. The intensity of the image reconstructed by the proposed method is very close to that of the full-view projective image. The results indicate that the proposed method is practical and effective for reducing artifacts while preserving the quality of the reconstructed image.
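A brief sketch of the residual-learning setup described above, assuming PyTorch; `artifact_net` stands in for the modified GoogLeNet and is trained to output the artifact image, which is then subtracted from the sparse-view reconstruction. The loss formulation is an assumption for illustration.

```python
# Illustrative sketch; artifact_net is a placeholder for the modified GoogLeNet.
import torch.nn.functional as F

def residual_training_loss(artifact_net, sparse_recon, full_view_recon):
    # residual learning: the target is the artifact image, i.e. the difference
    # between the sparse-view and full-view reconstructions (assumed setup)
    target_artifact = sparse_recon - full_view_recon
    return F.mse_loss(artifact_net(sparse_recon), target_artifact)

def correct(artifact_net, sparse_recon):
    # at inference, subtract the predicted artifacts to recover a clear image
    return sparse_recon - artifact_net(sparse_recon)
```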
Radiological imaging and image interpretation for clinical decision making are mostly specific to each body region, such as head & neck, thorax, abdomen, pelvis, and extremities. To automate image analysis and ensure consistency of results, standardizing the definitions of body regions and of the various anatomic objects, tissue regions, and zones within them becomes essential. Assuming that a standardized definition of body regions is available, a fundamental early step in automated image and object analytics is to automatically trim the given image stack into image volumes that exactly satisfy the body region definition. This paper presents a solution to this problem based on the concept of virtual landmarks and evaluates it on whole-body positron emission tomography/computed tomography (PET/CT) scans. The method first selects a (set of) reference object(s), segments it (them) roughly, and identifies virtual landmarks for the object(s). The geometric relationship between these landmarks and the boundary locations of body regions in the craniocaudal direction is then learned through a neural network regressor, and the locations are predicted. Based on low-dose unenhanced CT images of 180 near whole-body PET/CT scans (including 34 whole-body PET/CT scans), the mean localization error for the superior (TS) and inferior (TI) boundaries of the thorax, expressed as a number of slices (slice spacing ≈ 4 mm) and using either the skeleton or the pleural spaces as reference objects, is found to be 3 and 2 slices (skeleton) and 3 and 5 slices (pleural spaces), respectively, or 13 and 10 mm (skeleton) and 10.5 and 20 mm (pleural spaces), respectively. Improvements of this performance via optimal selection of objects and virtual landmarks, as well as other object analytics applications, are currently being pursued.
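A minimal sketch of the boundary regressor, assuming PyTorch; the input layout (flattened 3-D virtual-landmark coordinates) and all layer sizes are illustrative assumptions, and the output here is the two craniocaudal boundary locations (e.g., TS and TI) of one body region.

```python
# Illustrative sketch; layer sizes and input layout are assumptions.
import torch.nn as nn

def make_boundary_regressor(n_landmarks, n_boundaries=2, hidden=128):
    # maps flattened (x, y, z) virtual-landmark coordinates of the reference
    # object(s) to craniocaudal boundary locations of a body region
    return nn.Sequential(
        nn.Linear(3 * n_landmarks, hidden),
        nn.ReLU(),
        nn.Linear(hidden, hidden),
        nn.ReLU(),
        nn.Linear(hidden, n_boundaries),  # e.g., TS and TI locations
    )
```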
To reduce cupping artifacts and enhance contrast resolution in cone-beam CT (CBCT), this paper introduces a new approach that combines blind deconvolution with a level set method. The proposed method operates directly on the reconstructed image, requires no additional physical equipment, and is easily implemented on a single-scan acquisition. The results demonstrate that the algorithm is practical and effective for reducing cupping artifacts and enhancing contrast resolution while preserving the quality of the reconstructed image, and that it is very robust.
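A minimal sketch of the processing idea, assuming scikit-image and a 2-D slice normalized to [0, 1]; a fixed Gaussian kernel stands in for the blind PSF estimate, so this is not the paper's algorithm, only an illustration of deconvolution followed by a level-set step.

```python
# Illustrative sketch; the Gaussian PSF replaces the blind estimate and the
# iteration counts are arbitrary.
import numpy as np
from skimage.restoration import richardson_lucy
from skimage.segmentation import morphological_chan_vese

def gaussian_psf(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def correct_slice(cbct_slice):
    # deconvolve the reconstructed slice (input assumed normalized to [0, 1])
    deconv = richardson_lucy(cbct_slice, gaussian_psf(), 30)
    # level-set segmentation of the deconvolved slice into object/background
    mask = morphological_chan_vese(deconv, 60, init_level_set="checkerboard")
    return deconv, mask
```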