The substantia nigra (SN) has been reported to be significantly related to the progression of Parkinson's disease (PD). Fully automated segmentation of the SN is an important step towards developing an interpretable computer-aided diagnosis system for PD. Based on deep learning techniques, this paper proposes a novel distance-reweighted loss function and combines it with test-time normalization (TTN) to boost fully automated SN segmentation accuracy from low-contrast T2-weighted MRI. The proposed loss encourages the model to focus on suspicious regions with vague boundaries, and TTN narrows the gap between an input MRI volume and the reference MRI volumes at test time. The results showed that both the proposed loss and TTN help improve segmentation accuracy. By combining the proposed loss and TTN, the average Dice coefficient reached 70.90% on T2-weighted MRI, compared to 68.17% for the baseline method.
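The abstract does not give the exact form of the distance-reweighted loss; the following is a minimal sketch, assuming each voxel's weight increases near the ground-truth SN boundary. The weighting scheme (1 + alpha * exp(-d / sigma)), the function names, and the hyperparameters are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def boundary_weights(gt_mask: np.ndarray, alpha: float = 2.0, sigma: float = 3.0) -> np.ndarray:
    """Per-voxel weights that peak near the SN boundary (illustrative scheme)."""
    # Distance to the foreground from outside and to the background from inside;
    # together they give each voxel's distance to the object boundary.
    d_out = distance_transform_edt(gt_mask == 0)
    d_in = distance_transform_edt(gt_mask == 1)
    dist_to_boundary = np.where(gt_mask == 1, d_in, d_out)
    return 1.0 + alpha * np.exp(-dist_to_boundary / sigma)

def distance_reweighted_bce(logits: torch.Tensor, target: torch.Tensor,
                            weights: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy with voxel-wise distance-based weights."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weights * bce).mean()
```

In use, `boundary_weights` would be computed once per training label and converted to a tensor alongside the target mask before calling `distance_reweighted_bce`.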
This paper presents a gradient-based analytical method for improving medical image classification. The automated classification of diseases is important in computer-aided diagnosis. In addition to accurate classification, explainability is an essential factor. A gradient-based visual explanation provides explainability for a convolutional neural network (CNN) model. Most studies use this explanation to assess a CNN's validity in a qualitative manner. Our motivation is to utilize visual-explanation methods to enhance the classification accuracy of a CNN model. We propose a weight-analysis-based method to improve the classification accuracy of a trained CNN model. The proposed method selects important patterns based on a gradient-based weight analysis of a middle layer in a trained model and suppresses irrelevant patterns in the extracted features for classification. We applied our analytical method to a convolutional layer and a global-average-pooling layer in a CNN that classifies a chest CT volume into COVID-19 typical and non-typical cases. As shown by the classification results on 302 test cases, our method improved the accuracy of COVID-19 classification.
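The abstract describes the weight analysis only at a high level; the sketch below assumes a Grad-CAM-style channel importance score (the global average of the gradient of the target-class score with respect to a middle-layer feature map), followed by zeroing the least important channels. The layer name `model.last_conv` and the `keep_ratio` parameter are hypothetical.

```python
import torch

@torch.no_grad()
def apply_channel_mask(features: torch.Tensor, keep_idx: torch.Tensor) -> torch.Tensor:
    """Suppress all feature channels except the selected ones."""
    mask = torch.zeros(features.shape[1], device=features.device)
    mask[keep_idx] = 1.0
    return features * mask.view(1, -1, 1, 1, 1)

def select_important_channels(model, volume, target_class, keep_ratio=0.5):
    """Rank middle-layer channels by a gradient-based importance score."""
    feats = {}
    def hook(_, __, out):
        out.retain_grad()          # keep gradients of this intermediate tensor
        feats["conv"] = out
    h = model.last_conv.register_forward_hook(hook)   # hypothetical layer name
    score = model(volume)[0, target_class]            # assumes (1, num_classes) logits
    score.backward()
    h.remove()
    grads = feats["conv"].grad                        # (1, C, D, H, W)
    importance = grads.mean(dim=(0, 2, 3, 4)).abs()   # one score per channel
    k = int(keep_ratio * importance.numel())
    return importance.topk(k).indices
```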
This paper introduces an improved method for COVID-19 classification from computed tomography (CT) volumes using a combination of a complex-architecture convolutional neural network (CNN) and orthogonal ensemble networks (OEN). The novel coronavirus disease reported in 2019 (COVID-19) is still spreading worldwide. Early and accurate diagnosis of COVID-19 is required in this situation, and CT is an essential examination. Various computer-aided diagnosis (CAD) methods have been developed to assist and accelerate doctors' diagnoses. Although ensemble learning is one effective approach, existing methods combine general-purpose models that are not specialized for COVID-19. In this study, we attempted to improve the performance of a CNN for COVID-19 classification from chest CT volumes. The CNN model specializes in feature extraction from anisotropic chest CT volumes. We adopt OEN, an ensemble learning method that considers inter-model diversity, to boost its feature extraction ability. For the experiment, we used chest CT volumes of 1283 cases acquired at multiple medical institutions in Japan. The classification results on 257 test cases indicated that the combination improves classification performance.
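The abstract does not detail the OEN formulation; one common way to encourage inter-model diversity in an ensemble is to penalize the similarity of the feature vectors produced by the member networks during joint training. The penalty form and the `lambda_div` hyperparameter below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def orthogonality_penalty(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Penalize cosine similarity between pooled features of two ensemble members.

    feat_a, feat_b: (batch, dim) feature vectors from two CNNs.
    """
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    return (a * b).sum(dim=1).pow(2).mean()

# Illustrative combined objective for two members (lambda_div is an assumed hyperparameter):
# loss = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y) \
#        + lambda_div * orthogonality_penalty(feat_a, feat_b)
```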
This paper proposes an automated classification method for COVID-19 chest CT volumes using an improved 3D MLP-Mixer. Novel coronavirus disease 2019 (COVID-19) has spread over the world, causing a large number of infected patients and deaths. A sudden increase in the number of COVID-19 patients causes a manpower shortage in medical institutions. A computer-aided diagnosis (CAD) system provides quick and quantitative diagnosis results. A CAD system for COVID-19 enables an efficient diagnosis workflow and contributes to reducing such manpower shortages. In the image-based diagnosis of viral pneumonia cases, including COVID-19, both local and global image features are important because viral pneumonia causes many ground-glass opacities and consolidations over large areas of the lung. This paper proposes an automated classification method of chest CT volumes for COVID-19 diagnosis assistance. The MLP-Mixer is a recent image classification method with a Vision Transformer-like architecture. It performs classification using both local and global image features. To classify 3D CT volumes, we developed a hybrid classification model that consists of both a 3D convolutional neural network (CNN) and a 3D version of the MLP-Mixer. The classification accuracy of the proposed method was evaluated using a dataset that contains 1205 CT volumes, achieving a classification accuracy of 79.5%. This accuracy was higher than that of conventional 3D CNN models consisting of 3D CNN layers and simple MLP layers.
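As an illustration of the Mixer part of such a hybrid, the block below is a standard MLP-Mixer block applied to tokens flattened from a 3D feature map; the layer sizes and the way tokens are formed from the 3D CNN output are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
    def forward(self, x):
        return self.net(x)

class MixerBlock3D(nn.Module):
    """One Mixer block operating on tokens taken from a 3D feature map."""
    def __init__(self, num_tokens: int, channels: int, token_hidden=256, channel_hidden=512):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = MlpBlock(num_tokens, token_hidden)    # mixes across spatial tokens
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = MlpBlock(channels, channel_hidden)  # mixes across channels

    def forward(self, x):                         # x: (batch, num_tokens, channels)
        y = self.norm1(x).transpose(1, 2)         # (batch, channels, num_tokens)
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

# In a hybrid model, tokens could come from a 3D CNN backbone: a (B, C, D, H, W)
# feature map is flattened to (B, D*H*W, C) before a stack of MixerBlock3D layers.
```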
Neuromelanin magnetic resonance imaging (NM-MRI) has been widely used in the diagnosis of Parkinson's disease (PD) for its significantly enhanced contrast between the PD-related structure, the substantia nigra (SN), and surrounding tissues. This paper proposes a novel network combining priority gating attention and Bayesian learning to improve the accuracy of fully automatic SN segmentation from NM-MRI. Different from the conventional gated attention model, the proposed network uses a prior SN probability map to guide the attention computation. Additionally, to lower the risk of over-fitting and to estimate confidence scores for the segmentation results, Bayesian learning with Monte Carlo dropout is applied in the training and testing phases. The quantitative results showed that the proposed network achieved an average Dice score of 79.46%, compared with 77.93% for the baseline model.
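Monte Carlo dropout at test time is a standard procedure; the sketch below shows one minimal way to run it for a binary segmentation network, assuming the model outputs voxel-wise logits and contains Dropout/Dropout3d layers. The number of samples and the use of the standard deviation as a confidence map are illustrative choices.

```python
import torch

def enable_mc_dropout(model: torch.nn.Module) -> None:
    """Keep dropout layers stochastic while the rest of the model stays in eval mode."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout3d)):
            m.train()

@torch.no_grad()
def mc_dropout_segment(model, volume: torch.Tensor, num_samples: int = 20):
    """Monte Carlo dropout inference: mean prediction and voxel-wise uncertainty."""
    enable_mc_dropout(model)
    probs = torch.stack([torch.sigmoid(model(volume)) for _ in range(num_samples)])
    return probs.mean(dim=0), probs.std(dim=0)   # segmentation map, confidence map
```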
Automatic segmentation of the Parkinson's disease-related tissue, the substantia nigra (SN), is an important step towards accurate computer-aided diagnosis systems. Recently, deep learning methods have achieved state-of-the-art automated segmentation performance in various scenarios of medical image analysis. However, to acquire high-resolution segmentation results, conventional deep learning frameworks depend heavily on large amounts of fully annotated data, which are time-consuming and expensive to obtain for model training. Moreover, the SN structure is usually tiny and sensitive to the progression of Parkinson's disease (PD), which brings more anatomic variation among cases. To deal with these problems, this paper combines a cascaded fully convolutional network (FCN) with a size-reweighted loss function to automatically segment the tiny subcortical SN from T2-weighted MRI volumes. Different from conventional one-stage FCNs, we cascade two FCNs in a coarse-to-fine fashion for high-resolution segmentation of the SN. The first FCN is trained to locate the SN-containing ROI and produce a coarse segmentation mask from a down-sampled MRI volume. The second FCN then segments the SN at full resolution based on the results of the first FCN. Additionally, by giving higher weights to the SN region, the size-reweighted loss function encourages the model to concentrate on the tiny SN structure. Our results showed that the proposed FCN achieves a mean Dice score of 68.92%, compared with 66.40% for the baseline model.
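The exact weighting of the size-reweighted loss is not specified in the abstract; the following sketch assumes the foreground (SN) term of a binary cross-entropy loss is up-weighted by the inverse of its voxel fraction. The weighting rule is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def size_reweighted_bce(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy in which the tiny foreground (SN) is up-weighted
    by the inverse of its voxel fraction (weighting scheme is an assumption)."""
    fg_fraction = target.mean().clamp(min=1e-6)
    pos_weight = (1.0 - fg_fraction) / fg_fraction    # e.g. SN at 0.1% of voxels -> ~999
    return F.binary_cross_entropy_with_logits(logits, target, pos_weight=pos_weight)
```

In a coarse-to-fine cascade, this loss would typically be applied at both stages, with the second-stage input cropped to the ROI predicted by the first FCN.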
We propose a pattern-expression method based on rank-one tensor decomposition for the analysis of the substantia nigra in T2-weighted images. Capturing discriminative features in observed medical data is an important task in diagnosis. In diagnosing Parkinson's disease, capturing changes in the volumetric data of the substantia nigra supports clinical diagnosis. Furthermore, in drug discovery research for Parkinson's disease, statistical evaluation of changes in the substantia nigra caused by a developed medicine may also be necessary. Therefore, we tackle the development of a pattern-expression method to analyse the volumetric data of the substantia nigra. Experimental results showed different distributions of the computed rank-one tensor coefficients between Parkinson's disease and the healthy state. The results indicate the validity of the tensor-decomposition-based pattern-expression method for this analysis.
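The abstract does not state which algorithm computes the rank-one decomposition; a common choice is alternating least squares for a rank-one CP approximation, sketched below in plain NumPy. The iteration count and random initialization are illustrative.

```python
import numpy as np

def rank_one_decomposition(volume: np.ndarray, num_iters: int = 50):
    """Approximate a 3D volume by lam * (a outer b outer c), a rank-one tensor,
    using alternating least squares; lam acts as the pattern coefficient."""
    a = np.random.rand(volume.shape[0])
    b = np.random.rand(volume.shape[1])
    c = np.random.rand(volume.shape[2])
    for _ in range(num_iters):
        # Each factor update is the least-squares solution with the other two fixed.
        a = np.einsum('ijk,j,k->i', volume, b, c) / (b @ b * (c @ c))
        b = np.einsum('ijk,i,k->j', volume, a, c) / (a @ a * (c @ c))
        c = np.einsum('ijk,i,j->k', volume, a, b) / (a @ a * (b @ b))
    lam = np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c)
    return (lam,
            a / np.linalg.norm(a),
            b / np.linalg.norm(b),
            c / np.linalg.norm(c))
```

Comparing the distribution of `lam` (and the factor vectors) across PD and healthy cases is one way such coefficients could be used for group analysis.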
This paper proposes an automated classification method for chest CT volumes based on the likelihood of COVID-19. Novel coronavirus disease 2019 (COVID-19) has spread over the world, causing a large number of infected patients and deaths. A sudden increase in the number of COVID-19 patients causes a manpower shortage in medical institutions. A computer-aided diagnosis (CAD) system provides quick and quantitative diagnosis results. A CAD system for COVID-19 enables an efficient diagnosis workflow and contributes to reducing such manpower shortages. This paper proposes an automated classification method of chest CT volumes for COVID-19 diagnosis assistance. We propose a COVID-19 classification convolutional neural network (CNN) that has 2D/3D hybrid feature extraction flows. The 2D/3D hybrid feature extraction flows are designed to effectively extract image features from anisotropic volumes such as chest CT volumes. The flows extract image features on three mutually perpendicular planes in the CT volumes and then combine the features to perform classification. The classification accuracy of the proposed method was evaluated using a dataset that contains 1288 CT volumes. The average classification accuracy was 83.3%, which was higher than that of a classification CNN without the 2D/3D hybrid feature extraction flows.
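The following sketch illustrates the general idea of tri-planar 2D feature extraction, folding each of the three orthogonal slice directions into the batch dimension of a small 2D branch and fusing the pooled features; the branch architecture and fusion by concatenation are assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class TriPlanarFeatures(nn.Module):
    """Extract 2D features on axial, coronal, and sagittal planes and fuse them."""
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, out_channels, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for _ in range(3)
        ])

    def forward(self, x):                      # x: (B, 1, D, H, W)
        views = [x,                            # axial:    D slices of (H, W)
                 x.permute(0, 1, 3, 2, 4),     # coronal:  H slices of (D, W)
                 x.permute(0, 1, 4, 2, 3)]     # sagittal: W slices of (D, H)
        feats = []
        for view, branch in zip(views, self.branches):
            B, C, S, P, Q = view.shape
            slices = view.reshape(B * S, C, P, Q)           # fold slices into batch
            f = branch(slices).reshape(B, S, -1).mean(1)    # pool over slices
            feats.append(f)
        return torch.cat(feats, dim=1)         # (B, 3 * out_channels), fed to a classifier
```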
This paper presents segmentation of multiple organ regions from non-contrast CT volumes based on deep learning. We also report the usefulness of fine-tuning with a small amount of training data for multi-organ region segmentation. In a medical image analysis system, it is vital to recognize patient-specific anatomical structures in medical images such as CT volumes. We have previously studied a multi-organ region segmentation method for contrast-enhanced abdominal CT volumes using a 3D U-Net. Since non-contrast CT volumes are also commonly used in the medical field, segmentation of multi-organ regions from non-contrast CT volumes is also important for medical image analysis systems. In this study, we extract multi-organ regions from non-contrast CT volumes using a 3D U-Net and a small amount of training data. We perform fine-tuning from a pre-trained model obtained in our previous studies. The pre-trained 3D U-Net model is trained on a large number of contrast-enhanced CT volumes, and fine-tuning is then performed using a small number of non-contrast CT volumes. The experimental results showed that the fine-tuned 3D U-Net model could extract multi-organ regions from non-contrast CT volumes. The proposed training scheme using fine-tuning is useful for segmenting multi-organ regions with a small amount of training data.
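A minimal sketch of such a fine-tuning setup is shown below, assuming the pre-trained weights are available as a checkpoint file and the model exposes encoder and decoder sub-modules; the file name, attribute names, and learning rates are hypothetical, not the authors' exact protocol.

```python
import torch

def build_finetune_optimizer(model: torch.nn.Module,
                             encoder_lr: float = 1e-5,
                             decoder_lr: float = 1e-4) -> torch.optim.Optimizer:
    """Load contrast-enhanced pre-trained 3D U-Net weights and set up fine-tuning
    with a smaller learning rate for the encoder than for the decoder."""
    state = torch.load("unet3d_contrast_pretrained.pth")   # hypothetical checkpoint
    model.load_state_dict(state)
    return torch.optim.Adam([
        {"params": model.encoder.parameters(), "lr": encoder_lr},  # assumed attribute
        {"params": model.decoder.parameters(), "lr": decoder_lr},  # assumed attribute
    ])
```

Training then proceeds as usual on the small non-contrast CT dataset, with the low encoder learning rate helping to preserve the features learned from the larger contrast-enhanced dataset.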