Infrared image recognition technology has a wide range of applications in the field of gas detection. Unlike visible light images, gas detection in infrared images is relatively difficult due to the lack of clear contrast and the relative blurriness of gas targets. This paper proposes a weakly supervised distillation network to address the low detection accuracy of gas regions in infrared images of complex scenes. The method first generates accurate heatmaps as pseudo labels by utilizing complex class activation maps; these pseudo labels are then used to train the specialized model proposed in this paper, which produces more accurate heatmaps; finally, the heatmap results are fused with the foreground obtained by a background subtraction method to reduce false positives in combustible gas detection. The experimental results show that the proposed method achieves high accuracy in various scenarios and that the model runs efficiently on embedded systems, effectively solving the problem of infrared gas recognition in complex scenes.
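As a rough illustration of the final fusion step, the sketch below combines a model-produced heatmap with a foreground mask from background differencing; the function name, thresholds, and the simple frame-differencing scheme are assumptions for illustration, not details from the paper.

```python
import numpy as np

def fuse_heatmap_with_foreground(heatmap, frame, background,
                                 heat_thresh=0.5, diff_thresh=25):
    """Fuse a heatmap (e.g., from the distilled model) with a
    background-difference foreground mask to suppress false positives.

    heatmap   : float array in [0, 1]
    frame     : current infrared frame (grayscale)
    background: reference background frame (grayscale)
    """
    # Foreground from simple background subtraction (assumes a static camera).
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground = diff > diff_thresh

    # Candidate gas regions from the heatmap.
    gas_candidates = heatmap > heat_thresh

    # Keep only candidates supported by the foreground mask.
    return np.logical_and(gas_candidates, foreground)
```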
In pediatric patients with respiratory abnormalities, it is important to understand the alterations in regional dynamics of the lungs and other thoracoabdominal components, which in turn requires a quantitative understanding of what is considered normal in healthy children. Currently, such a normative database of regional respiratory structure and function in healthy children does not exist. The purpose of this study is to introduce a large open-source normative database from our ongoing Virtual Growing Child (VGC) project, which includes measurements of volumes, architecture, and regional dynamics in healthy children (6 to 20 years) derived from dynamic magnetic resonance imaging (dMRI). The database provides four categories of regional respiratory measurement parameters: morphological, architectural, dynamic, and developmental. It contains 3,820 3D segmentations (around 100,000 2D slices with segmentations), which to our knowledge makes it the largest dMRI dataset of healthy children. The database is unique in providing dMRI images, object segmentations, and quantitative regional respiratory measurement parameters for healthy children. It can serve as a reference standard for quantifying regional respiratory abnormalities on dMRI in young patients with various respiratory conditions and can facilitate treatment planning and response assessment. It can also be useful for advancing future AI-based research on MRI-based object segmentation and analysis.
It is important to understand the dynamic thoracoabdominal architecture and its change after surgery, since thoracic insufficiency syndrome (TIS) patients often suffer from spinal deformation, leading to alterations in regional respiratory structure and function. Free-breathing quantitative dynamic MRI (QdMRI) provides a practical solution for quantitatively evaluating the regional dynamics of the thorax in TIS patients. Our current aim is to investigate whether QdMRI can also be utilized to measure thoracoabdominal architecture in TIS patients before and after surgery. The study cohort comprises 49 paired TIS patients (before and after surgery, 98 dynamic MRI scans) and 150 healthy children. A total of 248 dynamic MRI scans were acquired, from which 248 4D images were constructed. 3D volume images at end expiration (EE) and end inspiration (EI) were used in the analysis, yielding a total of 496 3D volume images in this study. The left and right lungs, left and right hemi-diaphragms, left and right kidneys, and liver were then segmented automatically via deep learning prior to architectural analysis. Architectural parameters (3D distances and angles from the centroids of multiple objects) at EE and EI were computed for TIS patients and healthy children and compared via t-testing. The distance between the right lung and right hemi-diaphragm was found to be significantly larger at EI than at EE for both TIS patients and healthy children, and after surgery it becomes closer to that of healthy children.
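A minimal sketch of how architectural parameters of the kind described above (3D centroid distances and angles) and the associated t-tests might be computed from binary segmentation masks is shown below; the function names and the choice of reference axis for the angle are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np
from scipy import stats

def centroid(mask, spacing=(1.0, 1.0, 1.0)):
    """Centroid of a binary 3D segmentation mask in physical (z, y, x) units."""
    coords = np.array(np.nonzero(mask), dtype=float)  # shape (3, N)
    return coords.mean(axis=1) * np.asarray(spacing)

def distance_and_angle(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """3D distance between two object centroids and the angle (degrees) that
    the connecting line makes with the cranio-caudal (z) axis."""
    v = centroid(mask_b, spacing) - centroid(mask_a, spacing)
    dist = np.linalg.norm(v)
    angle = np.degrees(np.arccos(abs(v[0]) / dist))
    return dist, angle

# Group comparison (e.g., TIS patients vs. healthy children) with an unpaired
# t-test, and pre- vs. post-operative values with a paired t-test:
# t, p = stats.ttest_ind(tis_values, healthy_values)
# t, p = stats.ttest_rel(pre_op_values, post_op_values)
```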
Lung segmentation in dynamic thoracic magnetic resonance imaging (dMRI) is a critical step for quantitative analysis of thoracic structure and function in patients with respiratory disorders. Some semi-automatic and automatic lung segmentation methods based on traditional image processing models have been proposed, mainly for CT, with good performance. However, the low efficiency and limited robustness of these methods, and their inapplicability to dMRI, make them unsuitable for segmenting large numbers of dMRI datasets. In this paper, we present a novel automatic lung segmentation approach for dMRI based on two-stage convolutional neural networks (CNNs). In the first stage, we utilize a modified min-max normalization method to pre-process the MRI, increasing the contrast between the lung and surrounding tissue, and propose a corner-point- and CNN-based region of interest (ROI) detection strategy to extract the lung ROI from sagittal dMRI slices, which reduces the negative influence of tissues located far away from the lung. In the second stage, we input the adjacent ROIs of the target slices into a modified 2D U-Net to segment the lung tissue. The qualitative and quantitative results demonstrate that our approach achieves high accuracy and stability in lung segmentation for dMRI.
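The sketch below illustrates the two preprocessing ideas mentioned above: an intensity normalization step and the stacking of adjacent ROI slices as input channels for a 2D U-Net. The percentile-clipped variant of min-max normalization and the function names are assumptions; the paper's exact modification may differ.

```python
import numpy as np

def minmax_normalize(slice_2d, low_pct=1.0, high_pct=99.0):
    """Percentile-clipped min-max normalization (one plausible 'modified'
    variant; the exact modification used in the paper may differ)."""
    lo, hi = np.percentile(slice_2d, [low_pct, high_pct])
    clipped = np.clip(slice_2d, lo, hi)
    return (clipped - lo) / max(hi - lo, 1e-8)

def stack_adjacent_rois(rois, index):
    """Stack the target ROI slice with its two neighbors as a 3-channel
    input for a 2D U-Net."""
    prev_i = max(index - 1, 0)
    next_i = min(index + 1, len(rois) - 1)
    return np.stack([rois[prev_i], rois[index], rois[next_i]], axis=0)
```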
Obstructive sleep apnea syndrome (OSAS) is one of the most common sleep disorders endangering human health and is associated with episodes of apnea or hypopnea during sleep. In children, OSAS is associated with cardiovascular morbidity, neurobehavioral deficits, and poor quality of life, which highlights the importance of early diagnosis and treatment. Recent studies using dynamic magnetic resonance imaging (dMRI) have shown that adults with OSAS exhibit airway narrowing in specific regions that display increased variability in diameter during sleep as compared with controls. In this paper, we propose a novel method to compare OSAS patients with control subjects during awake and asleep states to assess the regional dynamic changes that occur in specific locations of the upper airway. Firstly, we segment the 3D upper airway with a previously developed fully automatic method. Then, different types of breathing cycles are selected by experts based on polysomnography. For each cycle, we calculate the distance of each point on the surface of the upper airway from end-expiration (EE) to end-inspiration (EI), which is then utilized for subsequent motion analysis. The 3D upper airway is subsequently divided manually into four anatomical parts. Lastly, the dynamic upper airway motion measurements from different cycle groups are compared between OSAS patients and control subjects. Comparisons of different types of cycles within the same anatomical part demonstrated significant differences between control subjects and OSAS patients in all anatomical parts, with some exceptions. These novel observations may provide insights into the pathophysiology of OSAS.
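One plausible way to realize the point-wise EE-to-EI motion measure described above is sketched below, using a Euclidean distance transform of the EI airway surface evaluated at EE surface voxels; the function names and this particular formulation are assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage

def surface(mask):
    """Surface voxels of a binary 3D mask (object voxels with a background neighbor)."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def ee_to_ei_surface_distances(mask_ee, mask_ei, spacing=(1.0, 1.0, 1.0)):
    """For every surface point of the airway at end-expiration, the distance
    (in physical units) to the closest surface point at end-inspiration."""
    dist_to_ei_surface = ndimage.distance_transform_edt(
        ~surface(mask_ei), sampling=spacing)
    return dist_to_ei_surface[surface(mask_ee)]
```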
Quantitative thoracic dynamic magnetic resonance imaging (QdMRI), a recently developed technique, provides a potential solution for evaluating treatment effects in thoracic insufficiency syndrome (TIS). In this paper, we integrate all related algorithms and modules from our work on TIS over the past 10 years into one system, named QdMRI, to address the following questions: (1) How can dynamic images be acquired effectively? Many TIS patients are unable to cooperate with breathing instructions during image acquisition; acquisition can only be performed under free-breathing conditions, and it is not feasible to use a surrogate device for tracing breathing signals. (2) How can thoracic structures, such as the left and right lungs separately, be assessed from the acquired images? (3) How can the dynamics of thoracic structures due to respiratory motion be depicted? (4) How can the structural and functional information be used for the quantitative evaluation of surgical TIS treatment and for the design of the surgical plan? The QdMRI system includes four major modules: dynamic MRI (dMRI) acquisition, 4D image construction, image segmentation (from the 4D image), and visualization of segmentation results, dynamic measurements, and comparisons of measurements from TIS patients with those from normal children. Scanning/image acquisition time for one subject is ~20 minutes, 4D image construction takes ~5 minutes, image segmentation of the lungs via deep learning takes 70 seconds for all time points (with an average Dice score of 0.96 in healthy children), and measurement computation takes 2 seconds.
KEYWORDS: Magnetic resonance imaging, Convolutional neural networks, Image segmentation, 3D magnetic resonance imaging, Visualization, Tissues, Quantitative analysis, 3D modeling
Upper airway segmentation in static and dynamic MRI is a prerequisite step for quantitative analysis in patients with disorders such as obstructive sleep apnea. Recently, some semi-automatic methods have been proposed with high segmentation accuracy. However, the low efficiency of such methods makes them difficult to apply to large numbers of MRI datasets. Therefore, a fully automatic upper airway segmentation approach is needed. In this paper, we present a novel automatic upper airway segmentation approach based on convolutional neural networks. Firstly, we utilize the U-Net network as the basic model for learning multi-scale features from adjacent image slices and predicting pixel-wise labels in MRI. In particular, we train three networks with the same structure to segment the pharynx/larynx and nasal cavity separately in axial static 3D MRI and axial dynamic 2D MRI. The visualization and quantitative results demonstrate that our approach can be applied to various MRI acquisition protocols with high accuracy and stability.
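As a hedged sketch of the kind of setup described above, the PyTorch snippet below defines a pared-down U-Net-style network that takes adjacent slices as input channels and instantiates three copies with the same structure; the network depth, channel widths, and the model names are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """A pared-down U-Net-style network taking adjacent slices as channels."""
    def __init__(self, in_channels=3, num_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # encoder level 1
        e2 = self.enc2(self.pool(e1))                        # encoder level 2
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # decoder with skip
        return self.out(d1)                                  # pixel-wise logits

# Three networks with the same structure, trained separately (hypothetical
# task names) for the pharynx/larynx and nasal cavity in static and dynamic MRI.
models = {name: MiniUNet(in_channels=3) for name in
          ("pharynx_larynx_static", "nasal_cavity_static", "pharynx_larynx_dynamic")}
```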