Super resolution (SR) is a technique for increasing the spatial resolution of an image from a low-resolution (LR) to a high-resolution (HR) size. SR is in considerable demand across a wide variety of applications that need to recover HR images, such as medicine, engineering, computer vision, pattern recognition, and video production. In contrast to interpolation-based algorithms, which often introduce distortions or irregular borders, this study proposes an implementation that preserves the edges and fine details of the original image by computing its wavelet decomposition. Different Discrete Wavelet Transform (DWT) families, such as Daubechies, Symlet, and Coiflet, were evaluated. The proposed system was implemented on a Raspberry Pi 4 Model B, an embedded device, to overcome the mobility limitations of a PC, making it possible to create an inexpensive and energy-efficient SR system of reduced complexity for real-time applications. To investigate visual performance, the SR images were analysed subjectively via human perception, confirming good perceptual quality for images of different nature from three datasets: Full-HD (DIV2K), medical (Raabin WBC), and remote sensing (Sentinel-1). The experimental results of the designed implementation demonstrate good performance under commonly used objective criteria: execution time, SSIM, and PSNR (0.742 s, 0.9164, and 38.72 dB, respectively) for images with a super-resolution size of 1356 × 2040 pixels.
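The DWT-based upscaling idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses the Haar wavelet for simplicity (the study evaluates Daubechies, Symlet, and Coiflet families), and it upscales by placing the scaled LR image in the approximation (LL) subband and inverting the transform; the full method additionally preserves detail subbands.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT (orthonormal): returns LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # approximation
    lh = (a + b - c - d) / 2.0  # horizontal detail
    hl = (a - b + c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: reconstructs the 2x-size image exactly."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def dwt_super_resolve(lr):
    """2x upscale sketch: use the scaled LR image as the LL band, zero details.

    The factor 2 compensates the 1/2 gain of the inverse transform so that
    intensities are preserved.
    """
    z = np.zeros_like(lr, dtype=float)
    return haar_idwt2(2.0 * lr.astype(float), z, z, z)
```

With zero detail bands this reduces to pixel replication; the benefit of the wavelet framing is that estimated (non-zero) detail subbands can be injected before the inverse transform to recover edges.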
Childhood leukaemia demands meticulous blood cell analysis for diagnosis, focusing on morphological irregularities such as asymmetry and abnormal cell counts. Traditional manual diagnosis via microscopic blood smear images suffers from limited reliability, long processing times, and observer variability. Computer-aided diagnostic (CAD) systems address these challenges. Integrating real-time image pre-processing and segmentation ensures swift operation, reducing the CAD system's processing time; this enhances its overall effectiveness, enabling timely medical intervention and better patient outcomes. This study aims to simplify the algorithmic complexity of the pre-processing steps, including bilateral filtering and Contrast-Limited Adaptive Histogram Equalization (CLAHE), alongside the segmentation stage involving morphological operations and the watershed algorithm. This work proposes a parallel implementation utilizing OpenMP and CUDA, evaluating its performance using accuracy and Intersection over Union (IoU) metrics along with computing time and algorithmic complexity. It highlights the benefits of parallel processing in enhancing efficiency and accuracy in blood cell analysis.
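The clipping step at the heart of CLAHE can be illustrated compactly. The sketch below is an assumption-laden simplification of the pre-processing stage, not the paper's OpenMP/CUDA code: it applies contrast-limited histogram equalization globally, omitting CLAHE's per-tile adaptivity and bilinear blending, to show how clipping the histogram bounds noise amplification.

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=0.01):
    """Global contrast-limited histogram equalization for a uint8 image.

    Illustrates only the clipping idea behind CLAHE: histogram bins above
    the clip limit are truncated and the excess mass is redistributed
    uniformly before building the equalization lookup table.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    limit = max(clip_limit * img.size, 1.0)          # bin cap, in pixel counts
    excess = np.maximum(hist - limit, 0.0).sum()     # mass above the cap
    hist = np.minimum(hist, limit) + excess / 256.0  # clip and redistribute
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```

A real CLAHE runs this per tile and interpolates the lookup tables between tile centres, which is exactly the data-parallel structure that OpenMP and CUDA exploit.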
A large amount of remote sensing data can now be easily acquired thanks to advances in sensor technologies. These sensors generate high-dimensional data in less time, producing big-data problems such as management and organization. Since the acquired data are characterized by large dimensionality and a lack of structure, information analysis becomes harder. Therefore, an organization stage should structure the data, reducing the dimension while maintaining the main properties to enable further analysis. Feature extraction and selection methods can achieve this task. Consequently, we aim to explore various pixel-wise feature extraction and selection algorithms to manage the organization stage of big data for hyperspectral images. Our work covers the comparison between feature vectors computed using the discrete Fourier transform, discrete cosine transform (DCT), and stationary wavelet transform. Moreover, spectral angle mapper, Jeffries–Matusita distance, spectral information divergence, and linear discriminant analysis (LDA) were implemented as feature selectors. Feature extraction and selection methods were combined and evaluated in terms of algorithm complexity, reduction efficiency, and classification accuracy with the aid of a support vector machine and a maximum likelihood classifier. The analysis shows that some linear transformations perform better in natural landscapes and others in urban images. Furthermore, the study found that the combination of DCT and LDA, which achieves high classification rates with an efficient dimension reduction, can be suitable for the organization stage of a big data remote sensing application for hyperspectral images.
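Two of the building blocks named above, pixel-wise DCT features and the spectral angle mapper, can be sketched directly. This is a hedged illustration under assumed conventions (DCT-II, first coefficients kept for energy compaction, SAM as the angle between spectra), not the authors' implementation.

```python
import numpy as np

def dct2_features(spectrum, n_keep=8):
    """DCT-II of one pixel's spectral vector; keep the first n_keep
    coefficients as a compact feature vector (energy compaction)."""
    x = np.asarray(spectrum, dtype=float)
    n = x.size
    k = np.arange(n)[:, None]
    # Basis row k: cos(pi * (n_idx + 1/2) * k / N), the standard DCT-II kernel
    basis = np.cos(np.pi * (np.arange(n) + 0.5) * k / n)
    return (basis @ x)[:n_keep]

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle in radians between two spectra;
    small angles mean spectrally similar pixels."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

In an organization stage, `dct2_features` would shrink each pixel's spectrum while `spectral_angle` (or JM distance, divergence, or LDA) scores which retained features separate the classes.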
This work is oriented towards time optimization of hyperspectral image classification. Images of this kind carry an immense computational cost during processing, particularly in tasks such as feature extraction and classification. Indeed, numerous state-of-the-art techniques have suggested reducing the dimension of the information. Nevertheless, real-time applications require fast information shrinkage, with feature extraction included, in order to lead to an agile classification. To address this problem, this study compares the time and algorithmic complexity of three transformations: the Fast Fourier Transform (FFT), the Discrete Cosine Transform (DCT), and the Discrete Wavelet Transform (DWT). Furthermore, three feature selection criteria are likewise analysed: the Jeffries–Matusita Distance (JMD), the Spectral Angle Mapper (SAM), and the unsupervised N-FINDR algorithm. An application based on this study was developed using the parallel programming paradigm in multicore mode on a cluster of two Raspberry Pi units, and was compared in time and algorithmic complexity with the sequential paradigm. Moreover, a Support Vector Machine (SVM) is incorporated into the application to perform the classification. The images used to test the algorithms were acquired by the Hyperion sensor, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), and the Reflective Optics System Imaging Spectrometer (ROSIS).
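The multicore pattern described above, splitting the pixel matrix among workers that each reduce their chunk before classification, can be sketched as follows. This is a schematic sketch, not the cluster implementation: Python threads stand in for the Raspberry Pi cores/nodes, and a DCT-style projection stands in for the transform stage.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reduce_chunk(chunk, n_keep=4):
    """Reduce one chunk of spectra (rows = pixels) to n_keep DCT-II
    coefficients per pixel via a shared cosine basis."""
    n = chunk.shape[1]
    k = np.arange(n_keep)[:, None]
    basis = np.cos(np.pi * (np.arange(n) + 0.5) * k / n)  # (n_keep, n)
    return chunk @ basis.T                                 # (pixels, n_keep)

def parallel_reduce(pixels, n_workers=2, n_keep=4):
    """Split the pixel matrix across workers and reduce chunks concurrently;
    the reduced features would then feed the SVM classifier."""
    chunks = np.array_split(pixels, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda c: reduce_chunk(c, n_keep), chunks))
    return np.vstack(parts)
```

Because each chunk is independent, the parallel result is identical to the sequential one; the speed-up comes purely from dividing the per-pixel work among cores.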
This article presents a theoretical substantiation of infrared camera application to pipeline leak detection. Three different physical principles behind the appearance of temperature anomalies caused by pipeline leaks are considered, and models of the pipeline and of the leak thermal contrast are described. Numerical values of the temperature differences for the three methods are determined, along with the basic parameters required of the thermal imaging equipment for detecting pipeline leaks.