The substantia nigra (SN) has been reported to be significantly related to the progression of Parkinson’s disease (PD). Fully automated segmentation of the SN is an important step towards developing an interpretable computer-aided diagnosis system for PD. Based on deep learning techniques, this paper proposes a novel distance-reweighted loss function and combines it with test-time normalization (TTN) to boost fully automated SN segmentation accuracy from low-contrast T2-weighted MRI. The proposed loss encourages the model to focus on suspicious regions with vague boundaries, and TTN narrows the gap between an input MRI volume and the reference MRI volumes at test time. The results showed that both the proposed loss and TTN improved segmentation accuracy. By combining the proposed loss and TTN, the average Dice coefficient reached 70.90% on T2-weighted MRI, compared to 68.17% for the baseline method.
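As a hedged illustration of the idea, the sketch below shows one way such a distance-reweighted loss could be written in PyTorch, with voxel weights peaking near the ground-truth boundary; the Gaussian weighting, the parameter sigma, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of a distance-reweighted segmentation loss (assumed form,
# not the paper's exact implementation): voxels near the ground-truth
# boundary, where contrast is lowest, receive larger weights.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def boundary_weight_map(mask: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    inside = distance_transform_edt(mask)        # distance to background
    outside = distance_transform_edt(1 - mask)   # distance to foreground
    dist = np.where(mask > 0, inside, outside)   # distance to the boundary
    return 1.0 + np.exp(-(dist ** 2) / (2 * sigma ** 2))

def distance_reweighted_bce(logits, target, weights):
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weights * bce).mean()

# Toy 3D example:
mask = np.zeros((16, 32, 32), dtype=np.float32)
mask[6:10, 12:20, 12:20] = 1.0
weights = torch.from_numpy(boundary_weight_map(mask))
loss = distance_reweighted_bce(torch.randn(16, 32, 32),
                               torch.from_numpy(mask), weights)
```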
This paper presents a gradient-based analytical method for improving medical image classification. Automated classification of diseases is important in computer-aided diagnosis. In addition to accurate classification, explainability is an essential factor. Gradient-based visual explanation provides explainability for a convolutional neural network (CNN) model. Most studies use such explanations to assess a CNN's validity in a qualitative manner. Our motivation is to utilize visual-explanation methods to enhance the classification accuracy of a CNN model. We propose a weight-analysis-based method that improves the classification accuracy of a trained CNN model. The proposed method selects important patterns based on a gradient-based weight analysis of a middle layer in the trained model and suppresses irrelevant patterns in the extracted features used for classification. We applied our analytical method to a convolutional layer and a global-average-pooling layer in a CNN that classifies a chest CT volume into COVID-19 typical and non-typical cases. As shown by the classification results on 302 testing cases, our method improved the accuracy of COVID-19 classification.
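A minimal sketch of the general idea follows, assuming PyTorch: per-channel importance is taken from the gradient of the class score with respect to a middle-layer feature map, and low-importance channels are masked before global average pooling. The toy network, the median threshold, and all names are illustrative assumptions rather than the paper's method.

```python
# Sketch of gradient-based channel selection before global average pooling
# (illustrative; the paper's exact weight analysis is not reproduced).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(16, n_classes)

    def forward(self, x, channel_mask=None):
        f = self.features(x)                      # (B, 16, H, W)
        if channel_mask is not None:              # suppress irrelevant channels
            f = f * channel_mask.view(1, -1, 1, 1)
        pooled = f.mean(dim=(2, 3))               # global average pooling
        return self.head(pooled), f

model = TinyCNN()
x = torch.randn(1, 1, 32, 32)
logits, feat = model(x)
# Gradient of the target class score w.r.t. the middle-layer feature maps,
# averaged over space, gives a per-channel importance weight.
score = logits[0, 1]
grads = torch.autograd.grad(score, feat)[0]       # (1, 16, H, W)
alpha = grads.mean(dim=(2, 3)).squeeze(0)         # per-channel importance
mask = (alpha > alpha.median()).float()           # keep the important half
logits_refined, _ = model(x, channel_mask=mask)
```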
Neuromelanin magnetic resonance imaging (NM-MRI) has been widely used in the diagnosis of Parkinson’s disease (PD) for its significantly enhanced contrast between the PD-related structure, the substantia nigra (SN), and surrounding tissues. This paper proposes a novel network combining priority gating attention and Bayesian learning to improve the accuracy of fully automatic SN segmentation from NM-MRI. Different from the conventional gated attention model, the proposed network uses a prior SN probability map to guide the attention computation. Additionally, to lower the risk of over-fitting and to estimate confidence scores for the segmentation results, Bayesian learning with Monte Carlo dropout is applied in both the training and testing phases. The quantitative results showed that the proposed network achieved an average Dice score of 79.46%, compared with 77.93% for the baseline model.
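The Monte Carlo dropout part can be sketched briefly: dropout is kept stochastic at inference, and the mean and variance over repeated passes give the segmentation estimate and a confidence map. The toy model and sample count below are assumptions for illustration.

```python
# Minimal sketch of Monte Carlo dropout at test time (assumed setup).
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module):
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()                    # keep dropout active in eval mode

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    model.eval()
    enable_mc_dropout(model)
    probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)   # mean prediction, uncertainty map

# Toy 3D segmentation head with dropout:
model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Dropout(0.5), nn.Conv3d(8, 1, 1))
mean_prob, uncertainty = mc_dropout_predict(model, torch.randn(1, 1, 16, 32, 32))
```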
Automatic segmentation of the Parkinson’s disease-related tissue, the substantia nigra (SN), is an important step towards accurate computer-aided diagnosis systems. Recently, deep learning methods have achieved state-of-the-art performance in automated segmentation across various scenarios of medical image analysis. However, to acquire high-resolution segmentation results, conventional deep learning frameworks depend heavily on full-size annotated data, which is time-consuming and expensive to prepare for model training. Moreover, the SN structure is usually tiny and sensitive to the progression of Parkinson’s disease (PD), which brings more anatomic variation among cases. To deal with these problems, this paper combines a cascaded fully convolutional network (FCN) with a size-reweighted loss function to automatically segment the tiny subcortical SN from T2-weighted MRI volumes. Different from conventional one-stage FCNs, we cascade two FCNs in a coarse-to-fine fashion for high-resolution segmentation of the SN. The first FCN is trained to locate the SN-containing ROI and produce a coarse segmentation mask from a down-sampled MRI volume. The second FCN segments the SN at full resolution based on the results of the first FCN. Additionally, by giving higher weights to the SN region, the size-reweighted loss function encourages the model to concentrate on the tiny SN structure. Our results showed that the proposed FCN achieved a mean Dice score of 68.92%, compared with 66.40% for the baseline model.
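One plausible reading of the size-reweighted loss is sketched below, assuming PyTorch: foreground voxels are weighted inversely to the structure's volume fraction, so a tiny target like the SN dominates the loss. The exact weighting scheme here is an assumption, not the paper's formula.

```python
# Minimal sketch of a size-reweighted loss (assumed form): the tiny SN
# foreground receives a weight inversely proportional to its volume fraction.
import torch
import torch.nn.functional as F

def size_reweighted_bce(logits, target, eps=1e-6):
    fg_fraction = target.mean().clamp(min=eps)    # tiny for the SN, e.g. ~0.001
    w_fg = 1.0 / fg_fraction                      # large weight on SN voxels
    weights = torch.where(target > 0, w_fg, torch.ones_like(target))
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weights * bce).mean()

target = torch.zeros(1, 1, 16, 32, 32)
target[..., 7:9, 14:18, 14:18] = 1.0
loss = size_reweighted_bce(torch.randn_like(target), target)
```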
We propose a pattern-expression method based on rank-one tensor decomposition for the analysis of the substantia nigra in T2-weighted images. Capturing discriminative features in observed medical data is an important task in diagnosis. In diagnosing Parkinson’s disease, capturing changes in the volumetric data of the substantia nigra supports clinical diagnosis. Furthermore, in drug discovery research for Parkinson’s disease, statistical evaluation of changes in the substantia nigra caused by a developed medicine may also be necessary. Therefore, we tackle the development of a pattern-expression method to analyse volumetric data of the substantia nigra. Experimental results showed different distributions of the computed coefficients of the rank-one tensors between Parkinson’s disease and the healthy state. The results indicate the validity of the tensor-decomposition-based pattern-expression method for this analysis.
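As a hedged sketch of the pattern-expression idea, a volume can be decomposed into a sum of rank-one tensors via CP decomposition, with the resulting coefficients compared between groups. The use of the tensorly library, the rank, and the random stand-in volume below are all assumptions for illustration.

```python
# Minimal sketch of rank-one pattern expression via CP decomposition,
# assuming the tensorly library (the paper's exact procedure may differ).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

volume = np.random.rand(24, 24, 24)      # stand-in for an SN ROI volume
weights, factors = parafac(tl.tensor(volume), rank=5, normalize_factors=True)
# 'weights' are the coefficients of the five rank-one tensors; comparing
# their distributions between PD and healthy groups is the core idea.
print(weights)
```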
We propose a method for three-dimensional reconstruction from a single-shot colonoscopic image. Extracting a three-dimensional colon structure is an important task in colonoscopy. However, a colonoscope captures only two-dimensional information as colonoscopic images. Therefore, estimation of three-dimensional information from two-dimensional images is in potential demand. In this paper, we integrate deep-learning-based depth estimation into three-dimensional reconstruction. This approach omits the inaccurate correspondence matching required in conventional three-dimensional reconstruction procedures. We experimentally demonstrated accurate reconstructions by comparing a polyp size in the three-dimensional reconstruction with an endoscopist's measurement.
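The reconstruction step can be illustrated with a standard pinhole back-projection: once depth is predicted per pixel, 3D points follow directly with no correspondence matching. The intrinsics below are made-up values, not calibration data from the paper.

```python
# Minimal sketch of depth-to-point-cloud back-projection (assumed intrinsics).
import numpy as np

def backproject(depth: np.ndarray, fx, fy, cx, cy) -> np.ndarray:
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx                  # pinhole camera model
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)    # (H, W, 3) point cloud

depth = np.full((480, 640), 50.0)              # stand-in for predicted depth [mm]
points = backproject(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```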
This paper proposes an intestinal region reconstruction method from CT volumes of ileus cases. Binarized intestine segmentation results often contain incorrect contacts or loops. We utilize a 3D U-Net to estimate a distance map, which is high only at the centerlines of the intestines, to obtain regions around the centerlines. The watershed algorithm is then applied, with local maxima of the distance map as seeds, to obtain “intestine segments”. These intestine segments are connected as graphs to remove incorrect contacts and loops and to extract “intestine paths”, which represent how the intestines run. Experimental results using 19 CT volumes showed that our proposed method properly estimated intestine paths. These results were intuitively visualized for understanding the shape of the intestines and finding obstructions.
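The seeded-watershed step can be sketched with standard scientific-Python tools; here a Euclidean distance transform stands in for the 3D U-Net's distance map, and all parameters are illustrative assumptions.

```python
# Minimal sketch of the seeded watershed on a centerline distance map.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

mask = np.zeros((32, 64, 64), dtype=bool)
mask[14:18, 8:56, 28:36] = True                # toy binary intestine mask
dist = ndimage.distance_transform_edt(mask)    # stand-in for the U-Net output
peaks = peak_local_max(dist, min_distance=5, labels=mask)
seeds = np.zeros_like(mask, dtype=int)
seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
segments = watershed(-dist, seeds, mask=mask)  # the "intestine segments"
```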
Subarachnoid hemorrhage (SAH) detection is a critical, severe problem that has long confused clinical residents. With the rise of deep learning technologies, SAH detection has made significant breakthroughs in the past ten years. However, performance degrades significantly on imbalanced data, a weakness for which deep learning models have long been criticized. In this study, we present a DenseNet-LSTM network with Class-Balanced Loss and a transfer learning strategy to solve the SAH detection problem on an extremely imbalanced dataset. Compared to previous works, the proposed framework not only effectively integrates greyscale features and spatial information from consecutive CT scans, but also employs the Class-Balanced Loss and transfer learning to alleviate adverse effects and broaden feature diversity, respectively, on a dataset with extremely scarce SAH cases, mimicking the actual situation of emergency departments. Comprehensive experiments were conducted on a dataset consisting of 2,519 cases without hemorrhage and only 33 cases with SAH. Experimental results demonstrate that the F-measure of SAH detection achieved a remarkable improvement: the backbone DenseNet121 gained around 33% after transfer learning, and on this basis, adding the Class-Balanced Loss and the LSTM structure further increased the F-measure by 6.1% and 2.7%, respectively.
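The Class-Balanced Loss (after Cui et al.) weights each class by the inverse of its "effective number of samples", (1 - β^n) / (1 - β). A minimal sketch follows, using the paper's class counts; the value of β and the normalization are illustrative choices.

```python
# Minimal sketch of Class-Balanced weighting combined with cross-entropy.
import torch
import torch.nn.functional as F

def class_balanced_weights(counts, beta=0.9999):
    counts = torch.tensor(counts, dtype=torch.float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)  # effective sample number
    w = 1.0 / effective
    return w / w.sum() * len(counts)                   # normalize to mean 1

weights = class_balanced_weights([2519, 33])           # [no-hemorrhage, SAH]
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
loss = F.cross_entropy(logits, labels, weight=weights)
```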
Endoscopic submucosal dissection is a minimally invasive treatment for early gastric cancer. In endoscopic submucosal dissection, a physician directly removes the mucosa around the lesion under an endoscope using a flush knife. However, the flush knife may accidentally pierce the colonic wall and generate a perforation. If a physician overlooks a small perforation, the patient may need emergency open surgery, since a perforation can easily cause peritonitis. To prevent perforations from being overlooked, a computer-aided diagnosis system is in potential demand. We believe an automatic perforation detection and localization function is very useful for the analysis of endoscopic submucosal dissection videos towards the development of a computer-aided diagnosis system. At the current stage, research on perforation detection and localization is progressing slowly, and automatic image-based perforation detection remains very challenging. Thus, we devote ourselves to the development of perforation detection and localization in colonoscopic videos. In this paper, we propose a supervised-learning method for perforation detection and localization in colonoscopic videos. This method uses dense layers in YOLO-v3 instead of residual units, and a combination of binary cross-entropy and generalized intersection-over-union loss as the loss function in the training process. As an initial study, this method achieved an accuracy of 0.854 and an AUC score of 0.850 for perforation detection, and a mean average precision of 0.884 for perforation localization.
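The generalized-IoU term of the combined loss can be sketched directly: GIoU extends IoU with a penalty based on the smallest enclosing box, so even non-overlapping boxes receive a useful gradient. The box format and example values below are illustrative.

```python
# Minimal sketch of the generalized-IoU loss term for boxes (x1, y1, x2, y2).
import torch

def giou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    ix1, iy1 = torch.max(a[..., 0], b[..., 0]), torch.max(a[..., 1], b[..., 1])
    ix2, iy2 = torch.min(a[..., 2], b[..., 2]), torch.min(a[..., 3], b[..., 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    union = area_a + area_b - inter
    ex1, ey1 = torch.min(a[..., 0], b[..., 0]), torch.min(a[..., 1], b[..., 1])
    ex2, ey2 = torch.max(a[..., 2], b[..., 2]), torch.max(a[..., 3], b[..., 3])
    enclose = (ex2 - ex1) * (ey2 - ey1)       # smallest enclosing box
    return inter / union - (enclose - union) / enclose

pred = torch.tensor([[10., 10., 50., 50.]])
gt = torch.tensor([[20., 20., 60., 60.]])
giou_loss = 1.0 - giou(pred, gt)              # combined with BCE in training
```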
The purpose of this paper is to present a method for visualising decision-reasoning regions in computer-aided pathological pattern diagnosis of endocytoscopic images. The endocytoscope enables direct observation of cells and their nuclei on the colon wall at up to 500-times ultramagnification. For this new modality, a computer-aided pathological diagnosis system is strongly required to support non-expert physicians. To develop such a CAD system, we adopt a convolutional neural network (CNN) as the classifier of endocytoscopic images. In addition to this classification function, based on CNN weight analysis, we develop a filter function that visualises decision-reasoning regions on classified images. This visualisation function helps novice endocytoscopists develop their understanding of pathological patterns in endocytoscopic images for accurate endocytoscopic diagnosis. In numerical experiments, our CNN model achieved 90% classification accuracy. Furthermore, experimental results show that the decision-reasoning regions suggested by our filter function contain pit patterns that are characteristic in real endocytoscopic diagnosis.
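A minimal sketch of one weight-analysis-based visualization is shown below: a class-activation-style map that combines the last convolutional feature maps with the classifier weights of the predicted class. The toy network and shapes are assumptions; the paper's filter function may differ.

```python
# Sketch of a class-activation-style decision-reasoning map (illustrative).
import torch
import torch.nn as nn

features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d((32, 32)))
classifier = nn.Linear(16, 2)                  # weights analysed for the map

x = torch.randn(1, 3, 128, 128)                # stand-in endocytoscopic image
fmap = features(x)                             # (1, 16, 32, 32)
logits = classifier(fmap.mean(dim=(2, 3)))     # global average pooling + FC
cls = logits.argmax(1).item()
# Weighted sum of feature maps using the predicted class's FC weights:
cam = torch.einsum("c,chw->hw", classifier.weight[cls], fmap[0]).detach()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```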
This paper presents a visualization method for intestine (small and large intestine) regions and their stenosed parts caused by ileus from CT volumes. Since it is difficult for non-expert clinicians to find stenosed parts, the intestine and its stenosed parts should be visualized intuitively. Furthermore, intestine regions in ileus cases are quite hard to segment, since the inside of the intestine is filled with liquids whose intensities on CT volumes are similar to those of the intestinal wall. The proposed method segments intestine regions with a 3D FCN (3D U-Net) trained by a weak annotation approach. Weak annotation makes it possible to train the 3D U-Net with a small number of manually traced label images of the intestine, sparing us from preparing many annotation labels of the intestine, which has a long and winding shape. Each intestine segment is volume-rendered and colored based on the distance from its endpoint. Stenosed parts (disjoint points of an intestine segment) can then be easily identified in such a visualization. In the experiments, we showed that stenosed parts were intuitively visualized as endpoints of segmented regions, colored red or blue.
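The coloring step might look like the sketch below: voxels of one segment are colored from blue (near the chosen endpoint) to red (far) by normalized distance, so disjoint endpoints stand out. The Euclidean distance and linear color ramp are illustrative stand-ins for the paper's scheme.

```python
# Minimal sketch of distance-based coloring of one intestine segment.
import numpy as np

def color_by_distance(coords: np.ndarray, endpoint: np.ndarray) -> np.ndarray:
    d = np.linalg.norm(coords - endpoint, axis=1)  # Euclidean stand-in for
    t = d / (d.max() + 1e-8)                       # the along-path distance
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)  # RGB per voxel

coords = np.argwhere(np.ones((4, 4, 4), dtype=bool))  # toy segment voxels
rgb = color_by_distance(coords, coords[0])             # endpoint = first voxel
```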
Micro-CT is a nondestructive scanning device that is capable of capturing three-dimensional structures at the μm level. With the spread of this device in medical fields, it is expected to bring further understanding of human anatomy through the analysis of three-dimensional micro-structures from volumes of in vivo specimens captured by micro-CT. In micro-structure analysis of the lung, methods for extracting surface structures, including the interlobular septa and the visceral pleura, have not been commonly studied. In this paper, we introduce a method to extract sheet structures such as the interlobular septa and the visceral pleura from micro-CT volumes. The proposed method consists of two steps: a Hessian-analysis-based method for sheet structure extraction, and a Radial Structure Tensor combined with roundness evaluation for hollow-tube structure extraction. We applied the proposed method to complex phantom data and a medical lung micro-CT volume, and confirmed the extraction of the interlobular septa from the medical volume in the experiments.
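The Hessian-analysis step can be sketched with a standard eigenvalue argument: a bright sheet has one strongly negative Hessian eigenvalue and two near-zero ones, so a plate-like measure peaks on structures such as the interlobular septa. The simple score below is an assumed stand-in for the paper's exact sheetness measure.

```python
# Minimal sketch of Hessian-based sheet enhancement (assumed measure).
import numpy as np
from scipy import ndimage

def sheetness(vol: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    g = ndimage.gaussian_filter(vol.astype(float), sigma)
    grads = np.gradient(g)                                # first derivatives
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            H[..., i, j] = np.gradient(grads[i], axis=j)  # Hessian entries
    ev = np.linalg.eigvalsh(H)
    order = np.argsort(np.abs(ev), axis=-1)
    ev = np.take_along_axis(ev, order, axis=-1)           # |l1| <= |l2| <= |l3|
    l2, l3 = ev[..., 1], ev[..., 2]
    score = np.abs(l3) - np.abs(l2)                       # plate-like response
    return np.where(l3 < 0, score, 0.0)                   # bright sheets only

vol = np.zeros((32, 32, 32))
vol[:, 15:17, :] = 1.0                                    # synthetic sheet
response = sheetness(vol)
```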
Measurement of polyp size is an essential task in colon cancer screening, since polyp-size information plays a critical role in decision-making during colonoscopy. However, estimating a polyp's size from a single colonoscopic view without a measurement device is quite difficult even for expert physicians. To overcome this difficulty, automated size estimation techniques would be desirable in clinical scenes. This paper presents a polyp-size classification method using a single colonoscopic image. Our proposed method estimates depth information from a single colonoscopic image with a trained model and utilises the estimated information for the classification. In our method, the depth estimation model is obtained by deep learning on colonoscopic videos. Experimental results show binary and trinary polyp-size classification with 79% and 74% accuracy, respectively, from a single still image of a colonoscopic movie.
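As a rough illustration of why depth enables size classification: under the pinhole model, a polyp spanning p pixels at depth d has a physical extent of roughly p·d/f for focal length f (in pixels), which can then be thresholded into size classes. All numbers and the 6 mm cutoff below are assumed example values, not the paper's classifier.

```python
# Minimal sketch of depth-based polyp-size estimation and classification.
def polyp_size_mm(pixel_extent: float, depth_mm: float, focal_px: float) -> float:
    return pixel_extent * depth_mm / focal_px   # pinhole-model approximation

def classify_binary(size_mm: float, cutoff_mm: float = 6.0) -> str:
    return "large" if size_mm >= cutoff_mm else "small"

size = polyp_size_mm(pixel_extent=80, depth_mm=30.0, focal_px=500.0)  # ~4.8 mm
print(classify_binary(size))   # the 6 mm cutoff is an assumed example threshold
```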
This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopic modality that enables both conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis solely on endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis requires substantial experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that classifies neoplastic and non-neoplastic endocytoscopic images. This method consists of two classification steps. In the first step, we classify an input image with a support vector machine and forward the image to the second step if the confidence of this first classification is low. In the second step, we classify the forwarded image with a convolutional neural network and reject the input image if the confidence of the second classification is also low. We experimentally evaluated the classification performance of the proposed method, using about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even on difficult test data.
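The two-stage decision logic can be sketched compactly; the thresholds, the dummy models, and the interfaces below are assumptions for illustration only.

```python
# Minimal sketch of the SVM-then-CNN cascade with rejection (assumed setup).
import numpy as np

class DummyModel:
    """Stand-in for a trained SVM (predict_proba) or CNN (softmax output)."""
    def __init__(self, probs):
        self.probs = np.asarray(probs)
    def predict_proba(self, X):
        return np.tile(self.probs, (len(X), 1))
    def __call__(self, x):
        return self.probs

def cascade_classify(features, image, svm, cnn, t_svm=0.9, t_cnn=0.9):
    p = svm.predict_proba([features])[0]
    if p.max() >= t_svm:
        return int(p.argmax())       # accepted at step 1 (SVM)
    p = cnn(image)
    if p.max() >= t_cnn:
        return int(p.argmax())       # accepted at step 2 (CNN)
    return None                      # rejected: confidence low at both steps

svm = DummyModel([0.55, 0.45])       # unsure -> forwarded to the CNN
cnn = DummyModel([0.04, 0.96])       # confident -> class 1 (neoplastic)
print(cascade_classify(np.zeros(10), None, svm, cnn))   # -> 1
```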