This PDF file contains the front matter associated with SPIE Proceedings Volume 9811, including the Title Page, Copyright information, Table of Contents, Introduction, Authors, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Several semiconducting oxide materials, most typically indium tin oxide, are currently widely used as transparent conducting electrodes (TCEs) in liquid crystal microlens arrays. In this paper, we fabricate a liquid crystal microlens array using graphene rather than a semiconducting oxide as the TCE. Optical experiments are carried out to characterize the focusing behavior of the electrically driven graphene-based liquid crystal microlens array (GLCMLA). The acquired optical fields show that the GLCMLA converges incident collimated light efficiently, and the relationship between the focal length and the applied voltage signal is presented. The GLCMLA is then deployed in a plenoptic camera prototype and raw images are acquired to verify its imaging capability. Our experiments demonstrate that graphene holds broad application prospects in adaptive optics.
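The focal-length/voltage trend reported above follows, to first order, the thin-lens approximation for a gradient-index LC microlens, f ≈ r²/(2·d·Δn_eff), where the applied voltage changes the effective birefringence Δn_eff. A minimal sketch; every parameter value below is an illustrative assumption, not a value from the paper:

```python
# Hedged sketch of the focal-length/voltage trend: under the thin-lens
# approximation, a gradient-index LC microlens has f ~ r^2 / (2 * d * dn_eff),
# where dn_eff is the voltage-dependent effective birefringence.
# All parameter values below are illustrative assumptions, not the paper's.

def lc_microlens_focal_length(radius_m, thickness_m, delta_n_eff):
    """Paraxial focal length (m) of a gradient-index LC microlens."""
    return radius_m ** 2 / (2.0 * thickness_m * delta_n_eff)

r = 64e-6   # microlens radius: 64 um (assumed)
d = 20e-6   # LC layer thickness: 20 um (assumed)
# Raising the drive voltage typically lowers dn_eff, lengthening f.
for dn_eff in (0.20, 0.10, 0.05):
    f_mm = lc_microlens_focal_length(r, d, dn_eff) * 1e3
    print(f"dn_eff = {dn_eff:.2f} -> f = {f_mm:.3f} mm")
```

The monotonic lengthening of f as Δn_eff drops is the qualitative voltage dependence the abstract refers to.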
An approach for representing and evaluating surface plasmonic lightening through 128×128 cylindrical liquid crystal microlens arrays (CLCMAs) is proposed. The CLCMAs are typical sandwiched structures, in which LC material ~20 μm thick fills a preshaped microcavity between a pair of parallel electrodes formed by silica wafers coated with an indium tin oxide (ITO) film. The top electrode is patterned into an array of micro-rectangle holes, each 200×60 μm² in size with a minimum spacing of 50 μm. Surface plasmonic radiation is excited and further participates in the focusing of incident beams in the visible range. The output light fields involving the plasmonic radiation are investigated. As the voltage signal rises from ~1.4 to ~5.5 VRMS, the excited plasmonic radiation sequentially presents typical states: converging the beam, focusing together with part of the incident beams, and finally lightening mainly along the edge of each individual ITO micro-rectangle hole.
Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Most of these satellites operate in sun-synchronous orbit, taking advantage of nearly identical illumination conditions over a given location and of global coverage. However, this inevitably brings problems, the most significant being that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of missions monitoring a specific region. Two methods are exploited to overcome this disadvantage: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. This paper presents an effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm. Orbit design is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is used to solve it. First, the mission demand is transformed into multiple objective functions, and the six orbital elements of the satellite are taken as genes in the design space; a simulated evolution process is then performed. An optimal solution is obtained after a specified number of generations via the evolution operations (selection, crossover, and mutation). To examine the validity of the proposed method, a case study is introduced: orbit design of an optical satellite for regional disaster monitoring, whose mission demand includes minimizing the average revisit time interval as one of two objectives. The simulation results show that the solution obtained by our method meets the users' demand, and we conclude that the presented method is efficient for remote sensing orbit design.
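The selection step at the core of such a multi-objective evolutionary approach can be sketched as non-dominated (Pareto) filtering over candidate orbits. Individuals are six-element orbit vectors; the two objective functions below are crude placeholders, not the paper's actual revisit-time or coverage models:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, objectives):
    """Return the non-dominated members of the population."""
    scores = [tuple(f(ind) for f in objectives) for ind in population]
    return [ind for ind, s in zip(population, scores)
            if not any(dominates(t, s) for t in scores if t != s)]

random.seed(0)
# Genes: semi-major axis (km), eccentricity, inclination, RAAN,
# argument of perigee, mean anomaly -- the six classical orbital elements.
pop = [[random.uniform(6900, 7500), random.uniform(0, 0.05),
        random.uniform(96, 100), random.uniform(0, 360),
        random.uniform(0, 360), random.uniform(0, 360)]
       for _ in range(50)]

revisit_proxy = lambda o: o[0] ** 1.5        # toy surrogate: larger orbit -> longer period
coverage_proxy = lambda o: abs(o[2] - 98.0)  # toy surrogate: distance from a nominal inclination

front = pareto_front(pop, [revisit_proxy, coverage_proxy])
print(f"{len(front)} non-dominated orbits out of {len(pop)}")
```

A full NSGA-II-style algorithm would alternate this selection with crossover and mutation of the genes; the sketch only shows the dominance test that drives elitism.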
In this paper, planar micro-nano-coils (PMNCs) with diverse planar spiral structures are designed for electrically driving and controlling liquid crystal microlenses (LCMs) through wireless power transmission. PMNCs with different basic shapes are fabricated, including micro-triangles, micro-squares, micro-pentagons, micro-hexagons, and micro-circles. For the designed microstructures, the inductance values of the microcoils are calculated by combining self-inductance with mutual inductance, using a loop iterative approximation based on the Greenhouse algorithm. In experiments, both wet and dry etching technologies are adopted to obtain the desired PMNCs on aluminum-coated glass substrates. The etching is performed on glass substrates coated with a photoresist mask that has been processed by common ultraviolet lithography. The wet and dry etching technologies differ in the way they erode the aluminum film: wet etching is a chemical process based on the reaction of the alkaline developer solution with the film, whereas dry etching is a physical process, such as ion beam etching, which can fabricate smaller microstructures than wet etching. After fabrication of the PMNCs, an electrical testing circuit is built to obtain their actual inductance values. By comparing the measured inductances with the theoretical predictions, improved PMNCs are proposed for driving and controlling LCMs, which demonstrate enhanced light transmission efficiency and make it more efficient to adjust the LCMs developed by us.
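The building block of the Greenhouse method is Grover's self-inductance formula for a straight rectangular conductor; the total inductance of a spiral is then the sum of segment self-inductances plus positive and negative mutual terms between parallel segments. A hedged sketch of the self-inductance part only (the mutual terms are omitted for brevity, and the dimensions below are hypothetical):

```python
import math

def segment_self_inductance_nH(length_cm, width_cm, thickness_cm):
    """Self-inductance (nH) of a straight rectangular conductor,
    per Grover's formula as used in the Greenhouse method
    (all dimensions in cm)."""
    s = width_cm + thickness_cm
    return 2.0 * length_cm * (math.log(2.0 * length_cm / s)
                              + 0.50049 + s / (3.0 * length_cm))

def square_spiral_self_inductance_nH(side_lengths_cm, width_cm, thickness_cm):
    """Sum of segment self-inductances for a square spiral; the full
    Greenhouse calculation would add +/- mutual-inductance terms
    between parallel segments."""
    return sum(segment_self_inductance_nH(l, width_cm, thickness_cm)
               for l in side_lengths_cm)

# Hypothetical 10 mm segment, 100 um wide, 1 um thick aluminum trace.
print(f"{segment_self_inductance_nH(1.0, 0.01, 0.0001):.2f} nH")
```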
A two-layered construction composed of micro-nano-structures (MNSs) is fabricated to investigate key properties of surface plasmon polaritons (SPPs) in the terahertz range. The construction consists of two layered MNSs on silicon substrates. One silicon substrate is covered by a layer of indium tin oxide (ITO). The other silicon substrate is sputtered with a thin aluminum film, which is further patterned into functional sub-wavelength aluminum structures (SWASs). The aluminum and ITO films are coupled through microsphere spacers to form a micro-cavity. The typical terahertz (THz) transmission of the construction is measured. The experimental results demonstrate that extraordinary transmission peaks, known as extraordinary optical transmission (EOT), appear in the THz transmittance spectrum. The analysis indicates that the THz radiation effectively excites SPPs over the SWASs.
An arrayed, electrically tunable infrared (IR) filter based on a liquid crystal Fabry-Perot (LC-FP) structure working in the 2.5 to 12 μm wavelength range is designed and fabricated. Exploiting the electrically controlled birefringence of nematic LC molecules, the refractive index of the LC material filled into a prefabricated microcavity can be adjusted by the spatial electric field generated between the top aluminum electrode, patterned by conventional UV photolithography, and the bottom aluminum electrode of the LC-FP. The developed LC-FP filter, driven and controlled electrically, performs the key functions of spectral selection and spectral adjustment. Our experiments show that the maximum transmittance of the transmission peaks is ~24%, and that the peaks shift as voltage signals with root mean square (RMS) values ranging from 0 to ~21.7 Vrms are applied. The experimental results are consistent with simulations based on the model we constructed. As a 4-channel array-type IR filter, the device has a top electrode composed of four identical sub-electrodes, each powered independently to select the desired transmission spectrum. Each unit of the device operates separately and synchronously, meaning that spectral images of the same object can be obtained at different wavelengths in one shot. Without any mechanical parts, the developed LC-FP filter offers several advantages, including ultra-small size, low cost, high reliability, high spectral selectivity, and compact integration.
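The spectral selection described above follows the ideal Fabry-Perot resonance condition m·λ = 2·n·d: tuning the effective LC index n between its ordinary and extraordinary values shifts every transmission order. A minimal sketch with a hypothetical cavity gap (not the device's actual dimensions):

```python
def fp_peak_wavelengths_um(n_eff, gap_um, lo_um=2.5, hi_um=12.0):
    """Transmission maxima of an ideal Fabry-Perot cavity (m * lam = 2 n d)
    that fall inside the [lo_um, hi_um] working band."""
    peaks, m = [], 1
    while True:
        lam = 2.0 * n_eff * gap_um / m
        if lam < lo_um:
            break
        if lam <= hi_um:
            peaks.append(round(lam, 4))
        m += 1
    return peaks

# Hypothetical 4-um cavity: raising the effective LC index red-shifts
# every interference order, which is the tuning mechanism of the filter.
print(fp_peak_wavelengths_um(1.5, 4.0))   # -> [12.0, 6.0, 4.0, 3.0]
print(fp_peak_wavelengths_um(1.7, 4.0))
```

A real device would weight these peaks by the mirror-finesse Airy function; the sketch only locates the resonances.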
Based on our previous work on electrically driven and adjusted liquid crystal microlenses, we present a new type of liquid crystal microlens array with dual-mode function (DLCMA). The DLCMAs developed by us consist of a top electrode pair built from two layers of controlling electrode structure, and a bottom electrode. The two top electrode layers are deposited over both sides of a glass substrate and insulated by a thin SiO2 coating, acting as the mode-control part of the DLCMAs. Another planar electrode layer, acting as the base electrode, is deposited on the surface of a second glass substrate. The two glass substrates with the fabricated electrode structures are coupled into a microcavity filled with nematic liquid crystal material. The DLCMAs proposed in this paper show excellent divergence and convergence performance while requiring only a relatively low driving voltage signal. The common optical properties of the micro-optical structures are characterized experimentally.
In this paper, we design and fabricate liquid crystal microlens arrays (LCMAs) with patterned electrodes made of monolayer graphene grown on copper sheet by chemical vapor deposition (CVD). Graphene, the first two-dimensional atomic crystal, uniquely combines extreme mechanical strength, high optical transmittance from the visible to the infrared spectrum, and excellent electrical conductivity. These properties make it highly attractive for photonic devices that require conductive but transparent thin films. The graphene-based LCMAs show excellent optical performance in our tests. By adjusting the voltage signal applied to the graphene-based LCMAs, the point spread functions (PSFs) and focusing images of incident laser beams at different wavelengths can be obtained. We also record the focusing images of common ITO-based LCMAs under the same experimental conditions to compare their respective advantages and disadvantages. Furthermore, the graphene-based LCMAs are used in visible imaging; during the imaging tests, the graphene electrodes in the LCMAs work well.
The rapid development of information technology has put enormous pressure on current optical storage technologies. Recently, it has been found that surface plasmon polariton modes (SPPMs) in metallic nanostructures can localize guided light beams to nanometer size, limited only by factors such as atomic structure, dissipation, and light dispersion, and thus far beyond the common diffraction limit of electromagnetic waves in dielectric media. This discovery provides a way to produce nanoscale light signals and thus enables a significant breakthrough in optical storage technologies. In this paper, our work focuses on the modeling and simulation of particular kinds of patterned metal-based nanostructures fabricated on a silicon dioxide (SiO2) wafer. The designed nanostructures are expected to concentrate and deliver incident light energy into nanoscale regions and generate nanoscale light signals. In our research, the duty cycle of the patterned nanostructures is taken as a key parameter, and the pattern geometry, the frequency of the incident electromagnetic wave, the size of each pattern, and the spacing between adjacent patterns are taken as variables. CST Microwave Studio is used to simulate beam transport and transformation behavior. By comparing the electric-field intensity distribution in the nano-areas and the reflectance of the nanostructure array, the nano-light-emission effects are analyzed.
Based on our previous work on liquid-crystal microlens arrays (LCMAs), a new kind of optical switch using 24×24 fiber arrays coupled with LCMAs is proposed and fabricated in this paper; the switch has a key dual-mode (on/off) function and works in the visible and infrared ranges. Unlike other common LCMAs, this new kind of dual-mode LCMA includes two layers of control electrodes deposited directly on the surface of the top glass substrate of the fabricated LC microcavity. The first layer is the patterned electrode, designed as circular holes of suitable diameter, and the second is the planar electrode. The two electrode layers are separated by a thin SiO2 film with a typical thickness of several micrometers, and the dual-mode microlenses are then driven by electrical signals with different root mean square (RMS) voltages.
Digital color image reproduction based on spectral information has become a field of much interest and practical importance in recent years. The representation of color in digital form with multi-band images is not very accurate, hence the use of spectral images is justified. Reconstructing high-dimensional spectral reflectance images from relatively low-dimensional camera signals is generally an ill-posed problem. The aim of this study is to use the principal component analysis (PCA) transform in spectral reflectance image reconstruction. The performance is evaluated by the mean, median, and standard deviation of color difference values. The mean, median, and standard deviation of the root-mean-square errors and of the goodness-of-fit coefficient (GFC) between the reconstructed and the actual spectral images were also calculated. Simulation experiments conducted on a six-channel camera system and on spectral test images show the performance of the suggested method.
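The PCA approach can be sketched directly: a low-dimensional basis is learned from training reflectances, and the basis coefficients are solved from the camera signals. Everything below is synthetic; the training spectra and the six Gaussian channel sensitivities are stand-ins for data the abstract does not provide:

```python
import numpy as np

# Hedged sketch of PCA-based reflectance reconstruction from camera
# signals. The training spectra and six-channel sensitivities are
# synthetic stand-ins; the paper's actual camera data are not available.

rng = np.random.default_rng(0)
n_bands, n_train, n_channels, k = 31, 200, 6, 6
x = np.linspace(0.0, 1.0, n_bands)           # normalized wavelength axis

# Smooth synthetic training reflectances (sums of random Gaussian bumps).
train = np.clip(
    sum(rng.uniform(0, 1, (n_train, 1))
        * np.exp(-((x - rng.uniform(0, 1, (n_train, 1))) ** 2) / 0.02)
        for _ in range(3)),
    0.0, 1.0)

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:k].T                                  # n_bands x k principal basis

# Hypothetical Gaussian channel sensitivities (n_bands x n_channels).
S = np.exp(-((x[:, None] - np.linspace(0.1, 0.9, n_channels)) ** 2) / 0.01)

def reconstruct(camera_signal):
    """Recover a reflectance spectrum from an n_channels camera signal."""
    w = np.linalg.pinv(S.T @ B) @ (camera_signal - S.T @ mean)
    return mean + B @ w

def gfc(r, r_hat):
    """Goodness-of-fit coefficient between two spectra."""
    return abs(r @ r_hat) / (np.linalg.norm(r) * np.linalg.norm(r_hat))

r_true = train[0]
r_hat = reconstruct(S.T @ r_true)
print(f"GFC of the reconstruction: {gfc(r_true, r_hat):.4f}")
```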
Data fusion using sub-bands, which can achieve higher range resolution without altering the bandwidth, hardware, or sampling rate of the radar system, has attracted increasing attention in recent years. This paper presents a method of ISAR imaging based on sub-band fusion and high-precision parameter estimation of the geometrical theory of diffraction (GTD) model. To resolve the incoherence between sub-band data, a coherent processing method is adopted: based on an all-pole model, the phase differences of the poles and scattering coefficients between the sub-bands are used to estimate the incoherent components effectively. After coherent processing, the high- and low-frequency sub-band data can be expressed as a uniform all-pole model. The gapped-data amplitude and phase estimation (GAPES) algorithm is used to fill the gapped band. Finally, the fused data are obtained by high-precision estimation of the GTD all-pole model parameters over the full band, including the number, type, and amplitude of the scattering centers. The experimental results on simulated data show the validity of the algorithm.
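The GTD all-pole model referenced above writes the frequency response of K scattering centers as E(f) = Σ_k A_k·(j f/f_c)^{α_k}·exp(−j4πf·r_k/c). A hedged sketch (hypothetical radar parameters, not the paper's) that synthesizes such data over a full band and recovers the dominant scatterer range from the IFFT range profile:

```python
import numpy as np

c = 3e8                        # propagation speed (m/s)
fc, B, N = 9e9, 1e9, 256       # assumed centre frequency, bandwidth, samples
f = fc + np.linspace(-B / 2, B / 2, N)

# GTD scattering centres: (amplitude, frequency-dependence exponent alpha,
# range in metres). Two hypothetical scatterers.
scatterers = [(1.0, 0.0, 2.0), (0.6, 0.5, 3.5)]

E = sum(A * (1j * f / fc) ** alpha * np.exp(-1j * 4 * np.pi * f * r / c)
        for A, alpha, r in scatterers)

profile = np.abs(np.fft.ifft(E))   # range profile of the full-band data
dr = c / (2 * B)                   # range resolution (0.15 m here)
dominant_range = np.argmax(profile) * dr
print(f"dominant scatterer estimated near {dominant_range:.2f} m")
```

The actual paper estimates the pole parameters (number, type α, amplitude) rather than just peak-picking the profile; the sketch only illustrates the signal model being fitted.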
Superoscillation is a novel super-resolution method based on propagating waves rather than evanescent waves; its limitation is that increased resolution comes at the cost of huge sidelobes. We developed a super-oscillatory filter based on a Chebyshev linear array, which overcomes the diffraction limit with lower sidelobes in a 4f system. Tests showed that the MSE of single-point and multipoint sample imaging was reduced by about half, with a corresponding improvement in PSNR.
Image quality assessment has been of major importance for several domains of the image industry, for instance restoration or communication and coding. New application fields are opening today with the increase of embedded computing power in cameras and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc.
We review the literature of image quality evaluation, paying attention to the very different underlying hypotheses and results of the existing methods. We explain why they differ and for which applications they may be beneficial, and we underline their limits, especially for possible use in the novel domain of computational photography. Being developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the said or unsaid ultimate goal.
We first consider the methods based on retrieving the parameters of a signal, mostly in spectral analysis; we then explore the more global methods that qualify image quality in terms of noticeable defects or degradation, as is popular in the compression domain; in a third field, the image acquisition process is considered as a channel between the source and the receiver, allowing the tools of information theory to be used and the system to be qualified in terms of entropy and information capacity.
However, these different approaches hardly address the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, at the crossroads of philosophy, biology, and psychology, we propose a brief review of the literature on qualifying beauty, present the attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.
One of the highly effective methods of operative remote environmental monitoring of land and water surfaces is laser sensing. It is known that the Raman scattering cross section is very small (10^-25 to 10^-27), so in some cases the return radiation captured from the target may amount to only a few tens of photons. High-speed sensing, fast processing, and ease of use of lidar units therefore require appropriate hardware and software systems for collecting, processing, storing, and organizing large amounts of data.
Hyperspectral images are high-dimensional data that contain much redundant information when used directly for classification. A support vector machine (SVM) can map hyperspectral data to a high-dimensional space effectively and make them linearly separable. In this paper, the spectral and spatial information of hyperspectral images are used to construct SVM kernel functions respectively, and a hyperspectral image classification method using a spatial-spectral combined-kernel SVM is proposed to improve classification accuracy. The proposed method is used to classify AVIRIS hyperspectral images. The results demonstrate that it achieves 96.13% overall accuracy for single-category classification and 84.81% overall accuracy for multi-class classification while using only ten percent of the total samples for training. In other words, the proposed method makes full use of the spectral and spatial information of hyperspectral data and distinguishes different categories more effectively than the traditional SVM classifier.
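A combined kernel of this kind is typically a weighted sum of a spectral kernel and a spatial kernel, fed to an SVM as a precomputed Gram matrix. The sketch below uses tiny synthetic clusters as a stand-in for AVIRIS pixels, scikit-learn's `SVC`, and an assumed weighting μ; none of these values come from the paper:

```python
import numpy as np
from sklearn.svm import SVC   # assumes scikit-learn is available

rng = np.random.default_rng(1)

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) Gram matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic stand-in for hyperspectral data: two classes with distinct
# spectral signatures (8 bands here) and distinct spatial features.
n = 40
spectral = np.vstack([rng.normal(0.0, 0.3, (n, 8)), rng.normal(3.0, 0.3, (n, 8))])
spatial = np.vstack([rng.normal(0.0, 0.3, (n, 2)), rng.normal(3.0, 0.3, (n, 2))])
labels = np.array([0] * n + [1] * n)

mu = 0.6  # spectral/spatial weighting, a tunable assumption
K = mu * rbf_kernel(spectral, spectral, 0.5) + (1 - mu) * rbf_kernel(spatial, spatial, 0.5)

clf = SVC(kernel="precomputed").fit(K, labels)
accuracy = (clf.predict(K) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because each summand is a valid kernel, their convex combination is too, so the composite matrix can be passed to any kernel machine unchanged.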
The images obtained by the large aperture static imaging spectrometer (LASIS) are two-dimensional images containing interference information, and acquiring the interferograms of different locations requires push-brooming the whole field of view. Because of the instability of the push-broom platform, the original LASIS images must be registered, and the interferogram after registration is no longer sampled at equal intervals. The spectral information is severely distorted when a fast Fourier transform (FFT) is applied directly to extract the spectrum from such an interferogram. In view of these problems, this paper introduces a method based on phase correlation and proposes an adaptive fast interpolation algorithm. Experimental results show that the proposed method recovers the spectral information well when the registration accuracy is sufficient.
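The phase-correlation step can be sketched at integer-pixel precision on synthetic data (the paper's adaptive interpolation refinement is not reproduced here): the normalized cross-power spectrum of two shifted frames has an inverse FFT that peaks at the shift.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation taking ref to moved,
    via the peak of the normalized cross-power spectrum."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    F /= np.abs(F) + 1e-12                  # keep only the phase
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = (int(v) for v in np.unravel_index(np.argmax(corr), corr.shape))
    h, w = ref.shape                        # map wrapped peaks to signed shifts
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))
moved = np.roll(ref, (5, -3), axis=(0, 1))   # a known mis-registration
print(phase_correlation_shift(ref, moved))   # -> (5, -3)
```

Sub-pixel registration, as needed for interferogram resampling, would interpolate around the correlation peak instead of taking the integer argmax.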
Feature-based registration has been successfully applied to multi-source remote sensing images. Unfortunately, mismatching still exists due to complex textures, spectral variation, nonlinear distortion, and large scale changes. In this paper, we propose a novel feature point matching method for multi-source remote sensing images. First, the Fast-Hessian detector extracts feature points, which are then described by the SURF descriptor. After that, we analyze the local neighborhood structures of the feature points and formulate point matching as an optimization problem that preserves local neighborhood structures. The shape context distances of the feature points are used to initialize the matching probability matrix; relaxation labeling is then adopted to update the probability matrix and refine the matching, maximizing the value of an objective function derived from preserving local neighborhood structures. Subsequently, a mismatch elimination method based on affine transformation and distance measurement removes the residual mismatched points. Throughout the matching procedure, multi-resolution analysis is adopted to decrease the scale difference between the multi-source remote sensing images, and the mutual information method is used to match the feature points of the down-sampled and original images. The experimental results show that the proposed method is robust and efficient for registration of multi-source remote sensing images.
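One building block mentioned above, the mutual-information comparison between down-sampled and original images, can be sketched directly from the joint grey-level histogram (the data here are synthetic, and the bin count is an assumption):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) between two equally sized images,
    estimated from their joint grey-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
scrambled = rng.permutation(img.ravel()).reshape(img.shape)

# A correctly registered pair shares far more information than a
# scrambled (mis-registered) one.
print(mutual_information(img, img), mutual_information(img, scrambled))
```

Because mutual information depends only on the joint intensity statistics, not on the intensity mapping itself, it is well suited to multi-source imagery where the two sensors render the same scene with different radiometry.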
This article addresses the image denoising problem in situations of strong noise. The method we propose is intended to preserve faint signal details under these difficult circumstances. The new method we introduce, called principal basis analysis, is based on a novel criterion: reproducibility, an intrinsic characteristic of the geometric regularity in natural images. We show how to measure reproducibility. We then present the principal basis analysis method, which chooses, in a sparse representation of the signal, the components optimizing the reproducibility degree to build a so-called principal basis. With this principal basis, we show that a noise-free reconstruction may be obtained. As an illustration, we apply principal basis analysis to the denoising of natural images with details at low signal-to-noise ratio, showing better performance than some reference methods.
This paper presents a unified way to estimate the parameters of an affine transformation in the absence of the original image. Using a 2-D cyclostationary characterization, we analytically show that the covariance of an affine-transformed image is periodic, with the period determined by the affine transformation matrix. Based on the relationship between the affine transformation matrix and the position of the resampling-caused striking peaks in the 2-D spectrum of the image's edge map, we further study how to estimate the parameters of several typical affine transformations, e.g., the scaling factor, the rotation angle, and the joint scaling-rotation parameters. Example outputs of our algorithm are shown, and comparative results are presented to evaluate its performance.
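The resampling-induced periodicity exploited above is easiest to see in one dimension: after factor-2 linear interpolation, every interpolated sample is exactly the mean of its neighbors, so the residual of a neighbor-predictor is periodic and yields a striking spectral peak. A hedged 1-D sketch (the paper works with the 2-D edge-map spectrum):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)

# Factor-2 linear upsampling: the "resampling" we want to detect.
up = np.interp(np.arange(0, len(x) - 1, 0.5), np.arange(len(x)), x)

# Residual of a linear predictor from the two neighbours.
r = up[1:-1] - 0.5 * (up[:-2] + up[2:])

# Every second residual is (numerically) zero, so |r| is period-2
# periodic and its DFT has a striking peak at the Nyquist bin.
spectrum = np.abs(np.fft.fft(np.abs(r)))
peak_bin = int(np.argmax(spectrum[1:])) + 1
print(peak_bin, len(r) // 2)   # the peak sits at bin len(r)/2
```

For a general affine warp in 2-D, the peak positions move with the transformation matrix, which is what allows its parameters to be estimated from the spectrum.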
The edges and contours of an object carry much of its information, so the detection and extraction of salient edges and contours has become one of the most active issues in automatic target recognition research. Weak edge enhancement plays an important role in contour detection. Based on psychophysical and physiological findings, this paper proposes a contour detection method that focuses on weak edge enhancement and is inspired by the visual mechanisms of the primary visual cortex (V1). The method consists of three steps. First, the response of each visual neuron in V1 is computed by local energy. Second, the local contrast corresponding to the classical receptive field (CRF) is computed; if the local contrast is below the low-contrast threshold, the non-classical receptive field (NCRF) is expanded to widen the spatial modulatory range by increasing the NCRF radius. Third, the facilitation and suppression (the contextual influence) exerted on a neuron through horizontal interactions are obtained using a spatially unified modulating function. We tested the method on synthetic images and acquired encouraging results.
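The first step, a V1-like response computed as local energy, can be sketched in 1-D with a quadrature (even/odd) Gabor pair: the energy peaks at edges regardless of their polarity. Filter parameters below are illustrative assumptions:

```python
import numpy as np

# Quadrature Gabor pair (illustrative sigma and spatial frequency).
t = np.arange(-8, 9)
sigma, f0 = 3.0, 0.25
g = np.exp(-t ** 2 / (2 * sigma ** 2))
even = g * np.cos(2 * np.pi * f0 * t)
even -= even.mean()                  # zero-mean: flat regions give no response
odd = g * np.sin(2 * np.pi * f0 * t)

# Test signal: a bright bar, i.e. two step edges at indices 30 and 70.
signal = np.zeros(100)
signal[30:70] = 1.0

e = np.convolve(signal, even, mode="same")
o = np.convolve(signal, odd, mode="same")
energy = np.sqrt(e ** 2 + o ** 2)    # local energy: even + odd responses
print("strongest response near index", int(np.argmax(energy)))
```

In the paper's 2-D setting this energy is computed per orientation, and the CRF/NCRF contrast logic then modulates it; the sketch covers only the energy measurement itself.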
Traditional sonar image segmentation methods suffer from limited robustness and low accuracy, so we propose a segmentation method based on a Tree-Structured Markov Random Field (TS-MRF), which makes better use of spatial information. First, the sonar image is modeled by a tree-structured sequence of binary MRFs: each node describes local image information, and the hierarchy establishes relationships among nodes, so the hierarchical structure of the image is described while its local information is preserved effectively. Then, a split gain coefficient is defined to reflect the ratio of the labeling posterior probabilities before and after a split, given the observed image features; using this gain as the criterion for splitting a node of the binary tree reduces the complexity of evaluating the posterior probability. Finally, during segmentation the leaf node with the maximum split gain is split repeatedly, and a merging step is added so that region splitting and merging together reduce erroneous divisions and yield the final segmentation. Experimental results show that this approach achieves high segmentation accuracy and robustness.
Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise of different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image into luminance and chrominance images, which are then denoised by a common filtering framework. The framework processes a noisy pixel by first excluding the neighborhood pixels that deviate significantly from the (vector) median and then using the remaining neighborhood pixels to restore the current pixel. Within the framework, the strength of chrominance denoising is controlled by image brightness. Experimental results show that the proposed method clearly outperforms several representative denoising methods in terms of both objective measures and visual evaluation.
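The common filtering framework described above can be sketched for a single grayscale channel (the MAD-based deviation test and its parameters are illustrative stand-ins for the paper's exact criterion):

```python
import numpy as np

def robust_mean_filter(image, radius=1, k=2.0):
    """Denoise by averaging only neighbours close to the local median.

    For each pixel, neighbours deviating from the window median by more
    than `k` times the median absolute deviation are excluded; the
    remaining neighbours are averaged to restore the pixel.
    """
    padded = np.pad(image, radius, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2*radius + 1, j:j + 2*radius + 1].ravel()
            med = np.median(win)
            mad = np.median(np.abs(win - med)) + 1e-9
            keep = win[np.abs(win - med) <= k * mad]   # drop outliers
            out[i, j] = keep.mean()
    return out

noisy = np.full((8, 8), 100.0)
noisy[4, 4] = 255.0                      # isolated impulse
clean = robust_mean_filter(noisy)        # impulse removed, flat area kept
```

Excluding median-deviant neighbours before averaging is what lets the filter remove impulses without the blurring of a plain mean filter.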
The binarized image is critical to visual feature extraction, especially shape features, and image binarization approaches have attracted much attention over the past decades. In this paper, a genetic algorithm is applied to optimize the binarization threshold of strip steel defect images. To evaluate our genetic-algorithm-based binarization quantitatively, we propose a novel pooling-based evaluation metric, motivated by the information retrieval community, which avoids the need for ground-truth binary images. Experimental results show that our approach is effective and efficient on strip steel defect images, and that the pooling-based quantitative evaluation metric is feasible and practical.
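A minimal version of such a search can be sketched with a toy genetic algorithm whose fitness is the Otsu between-class variance (the paper's actual fitness function and GA operators may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def between_class_variance(pixels, t):
    """Otsu-style fitness: variance between the two thresholded classes."""
    fg, bg = pixels[pixels > t], pixels[pixels <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / pixels.size, bg.size / pixels.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def ga_threshold(pixels, pop=20, gens=40):
    """Evolve a binarization threshold with a minimal genetic algorithm."""
    population = rng.uniform(0, 255, pop)
    for _ in range(gens):
        fitness = np.array([between_class_variance(pixels, t) for t in population])
        elite = population[np.argsort(fitness)[-pop // 2:]]          # selection
        children = (rng.choice(elite, pop // 2)
                    + rng.choice(elite, pop // 2)) / 2               # crossover
        children += rng.normal(0, 5, pop // 2)                       # mutation
        population = np.concatenate([elite, np.clip(children, 0, 255)])
    fitness = np.array([between_class_variance(pixels, t) for t in population])
    return float(population[np.argmax(fitness)])

# Bimodal test data: dark background around 50, bright defects around 200.
pixels = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 10, 500)])
threshold = ga_threshold(pixels)         # lands between the two modes
```

The best individual always survives into the elite set, so the search improves monotonically toward the valley between the two intensity modes.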
Phase unwrapping is a key step in InSAR (Synthetic Aperture Radar Interferometry) processing, and its result directly affects the accuracy of the DEM (Digital Elevation Model) and of ground deformation measurements. However, decoherence phenomena such as shadow and layover in areas of severe land subsidence, where the terrain is steep and the slope changes sharply, propagate errors through the differential wrapped phase and lead to an inaccurate unwrapped phase. To suppress noise and reduce the effect of undersampling caused by topographic factors, this study uses a weighted least-squares method in the frequency domain based on a confidence level. The method expresses the terrain slope in the interferogram as the local phase frequency in the range and azimuth directions and integrates the two into a confidence level. This parameter is used as the constraint of the nonlinear least-squares phase unwrapping algorithm, smoothing unwanted unwrapped phase gradients and improving unwrapping accuracy. Finally, a comparison on interferometric data of the Beijing subsidence area obtained from TerraSAR verifies that the algorithm is more accurate and stable than conventional weighted least-squares phase unwrapping algorithms and can take terrain factors into account.
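In the aliasing-free 1-D case, least-squares phase unwrapping reduces to integrating the wrapped phase gradients; the paper's weighted 2-D formulation generalizes this. A sketch:

```python
import numpy as np

def wrap(p):
    """Wrap phase to the interval [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def ls_unwrap_1d(psi):
    """Least-squares unwrapping in 1-D: integrate wrapped gradients.

    When the true gradient never exceeds pi per sample (no aliasing),
    the wrapped differences equal the true differences, so a cumulative
    sum recovers the phase exactly up to the starting value.
    """
    grad = wrap(np.diff(psi))
    return np.concatenate([[psi[0]], psi[0] + np.cumsum(grad)])

true_phase = np.linspace(0, 12 * np.pi, 400)   # ramp crossing many 2*pi cycles
wrapped = wrap(true_phase)
recovered = ls_unwrap_1d(wrapped)
max_err = float(np.max(np.abs(recovered - true_phase)))
```

In 2-D the wrapped gradients are generally inconsistent, which is where the confidence-level weights matter: low-confidence gradients (shadow, layover) are down-weighted in the least-squares system.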
To improve classification accuracy, quotient space theory is applied to the classification of polarimetric SAR (PolSAR) images. First, Yamaguchi decomposition is adopted to obtain the polarimetric characteristics of the image, while the Gray Level Co-occurrence Matrix (GLCM) and Gabor wavelets are used to extract texture features. Second, combining the texture features with the polarimetric characteristics, a Support Vector Machine (SVM) classifier performs the initial classifications, establishing spaces of different granularity. Finally, according to quotient space granularity synthesis theory, the different quotient spaces are merged to obtain the comprehensive classification result. The proposed method is tested on L-band AIRSAR data of San Francisco Bay. The results show that the comprehensive classification based on quotient space theory is superior to the classification in any single granularity space.
VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based, burst-mode data access keeps the off-chip memory bandwidth requirement to a minimum. Although the patch size varies across pyramid levels, all patches are processed in terms of 3x3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design uses 24,080 bits of on-chip memory, and for a 352x288 sequence at 60 Hz the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. The design can be used in applications such as video coding, video stabilization, and super-resolution, where real-time GME is a necessity and a minimal memory bandwidth requirement is appreciated.
In this paper, a CCD sensor is used to record distorted images of a homemade grid taken by a wide-angle camera. The distorted images are corrected by position calibration and gray-level correction implemented with VC++ 6.0 and OpenCV. Holograms of the corrected pictures are then produced. Clear reconstructed images are obtained by applying the Fresnel algorithm, in which the object and reference light are recovered from Fresnel diffraction so that the zero-order part of the reconstructed images can be removed. The investigation is useful for optical information processing and encrypted image transmission.
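The reconstruction relies on Fresnel diffraction, which in the paraxial regime can be computed with a transfer function in the frequency domain. A minimal sketch (aperture size, sampling pitch, wavelength, and distance are all illustrative values):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z with the Fresnel transfer
    function H = exp(-i*pi*lambda*z*(fx^2 + fy^2)) applied in the
    frequency domain (paraxial approximation)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Square aperture illuminated by a unit plane wave (hypothetical parameters).
n, dx = 256, 10e-6                       # 10 um sampling pitch
aperture = np.zeros((n, n), dtype=complex)
aperture[96:160, 96:160] = 1.0
diffracted = fresnel_propagate(aperture, 632.8e-9, dx, 0.05)
energy_in = float(np.sum(np.abs(aperture) ** 2))
energy_out = float(np.sum(np.abs(diffracted) ** 2))   # unitary propagation
```

Because the transfer function has unit modulus, the propagation is energy-conserving, which is a convenient sanity check on any FFT-based Fresnel implementation.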
In recent years, gain-cell eDRAM has attracted growing interest for high-density, logic-compatible embedded memories. It is widely used in image processing and biomedical applications because it can work as a dual-port memory with a small area. A hidden refresh scheme is proposed for dual-port gain-cell eDRAM to avoid conflicts between external accesses and internal refreshes and to increase data availability. By dividing the read, write, and refresh operations into several stages, a hidden refresh controller performs dual-port accesses and internal refreshes in parallel without any conflict. The hidden refresh scheme is integrated into a 256x256 dual-port gain-cell eDRAM in the SMIC 130 nm logic process. Simulation results demonstrate that external accesses are performed without delay and that dual-port data availability reaches 100%, while the access cycle time increases by only about 10.9% compared with the traditional distributed refresh method. The refresh power of the eDRAM is about 60 μW/Mbit at 85°C.
With the development of FPGAs, DSP Builder is widely applied to the design of system-level algorithms. The CL multiwavelet is more advanced and effective than scalar wavelets for signal decomposition. Thus, a CL multiwavelet system based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem, and a two-level decomposition subsystem. It can be converted into the hardware description language VHDL by the Signal Compiler block for use in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved suitable for implementing signal fusion on SoPC hardware and lays a solid foundation in this new field.
In this paper, we choose four different variances (1, 3, 6, and 12) and implement three kinds of Gaussian filtering algorithms on an FPGA: direct Gaussian filtering with a Gaussian filter template, Gaussian filter approximation by mean filtering, and Gaussian filter approximation by IIR filtering. Through waveform simulation and synthesis, we obtain the processing results on the experimental image and the FPGA resource consumption of the three methods. Taking the result of MATLAB's Gaussian filter as the reference, we compute the error of each implementation. By comparing the FPGA resources and the errors of the implementations, we determine the best FPGA design for a Gaussian filter. Conclusions can be drawn from the results. When the variance is small and FPGA resources suffice, implementing the Gaussian filter with a filter template is the best choice. When the variance is so large that FPGA resources run out, the Gaussian filter can instead be approximated by mean filtering or IIR filtering.
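The mean-filter approximation exploits the central limit theorem: repeated box filters converge to a Gaussian, and a box filter is cheap in hardware. A software sketch of the idea (the pass count and box width are illustrative):

```python
import numpy as np

def box_filter(signal, width):
    """Mean (box) filter via 1-D convolution with a uniform kernel."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def approx_gaussian(signal, passes=3, width=5):
    """Approximate Gaussian smoothing by repeated mean filtering.

    By the central limit theorem, n passes of a box of width w approach
    a Gaussian with sigma ~ sqrt(n * (w**2 - 1) / 12).
    """
    out = signal
    for _ in range(passes):
        out = box_filter(out, width)
    return out

impulse = np.zeros(101)
impulse[50] = 1.0
response = approx_gaussian(impulse)               # bell-shaped effective kernel
sigma_eff = float(np.sqrt(3 * (5**2 - 1) / 12))   # ~2.45 for 3 passes of width 5
```

Three passes of a width-5 box already give a visually Gaussian impulse response; on an FPGA each pass costs only adders, not the multipliers a large Gaussian template would require.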
This paper studies the implementation of feature-based image registration on a System on a Programmable Chip (SoPC) hardware platform. We implement the image registration algorithm on an FPGA chip, on which the embedded soft-core Nios II processor speeds up the image processing system. In this way, image registration no longer depends on a PC, which should allow the technique to be used much more widely. Experimental results indicate that the system performs stably, with good noise immunity in the matching process and a reasonable distribution of image feature points.
To improve the capacity and imperceptibility of image steganography, a novel high-capacity, high-imperceptibility steganography method combining the framelet transform and compressive sensing (CS) is put forward. First, SVD (Singular Value Decomposition) is applied to the measurement values obtained by compressive sensing of the secret data. Then the singular values are embedded, in turn, into the low-frequency coarse subbands of the framelet transforms of the non-overlapping blocks of the cover image. Finally, inverse framelet transforms are applied and the blocks are recombined to obtain the stego image. Experimental results show that the proposed method performs well in hiding capacity, security, and imperceptibility.
Automatic target detection is a challenging task, as the response from an underwater target may vary greatly depending on its configuration, the sonar parameters, and the environment. We propose a Z-test algorithm for target detection in side-scan sonar images that accommodates this variation in the target response. A Z-test is performed on the means of the pixel gray levels inside and outside a window area, and a detection is declared when the test statistic exceeds a threshold. The algorithm is formulated for real-time execution on limited-memory commercial-off-the-shelf platforms and is capable of detecting objects on the seabed.
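The windowed Z-test itself is compact; a sketch with synthetic sonar-like data (window size, threshold, and the use of global statistics as the background estimate are all illustrative choices):

```python
import numpy as np

def z_test_detect(image, win=5, z_thresh=4.0):
    """Slide a window over the image and flag centres whose window mean
    differs significantly from the background mean.

    The Z statistic compares the window mean against the global mean,
    scaled by the standard error of a mean of win*win samples.
    """
    bg_mean = image.mean()               # simple global background estimate
    bg_std = image.std() + 1e-9
    n = win * win
    r = win // 2
    h, w = image.shape
    hits = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            m = image[i - r:i + r + 1, j - r:j + r + 1].mean()
            z = (m - bg_mean) / (bg_std / np.sqrt(n))
            if z > z_thresh:
                hits.append((i, j))
    return hits

rng = np.random.default_rng(2)
sonar = rng.normal(50, 5, (40, 40))      # speckle-like background
sonar[18:23, 18:23] += 40                # bright 5x5 target
detections = z_test_detect(sonar)        # centres clustered on the target
```

Because the test is on the window mean rather than single pixels, a modest intensity bump over a 5x5 area yields a very large Z value even in heavy speckle.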
This paper studies problems in discriminating chilled, thawed, and spoiled meat by hyperspectral imaging, such as the selection of feature wavelengths. First, from the 400-1000 nm hyperspectral image data of the pork test samples, a K-medoids clustering algorithm based on manifold distance selects 30 important wavelengths out of 753, from which 8 feature wavelengths (454.4, 477.5, 529.3, 546.8, 568.4, 580.3, 589.9, and 781.2 nm) are chosen according to their discrimination value. Then 8 texture features of the image at each feature wavelength are extracted by the two-dimensional Gabor wavelet transform as pork quality features. Finally, a pork quality classification model is built using the fuzzy C-means clustering algorithm. The feature wavelength experiments show that although hyperspectral images in adjacent bands have a strong linear correlation, across the entire band range they exhibit a significant nonlinear manifold relationship; the manifold-distance K-medoids algorithm used here for selecting characteristic wavelengths is therefore more reasonable than traditional principal component analysis (PCA). The classification results show that hyperspectral imaging can accurately distinguish among chilled, thawed, and spoiled meat.
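The wavelength-selection step rests on k-medoids clustering over a distance matrix. A generic sketch (Euclidean distance on synthetic 1-D points stands in for the paper's manifold distance between spectral bands):

```python
import numpy as np

rng = np.random.default_rng(3)

def k_medoids(dist, k, iters=50):
    """Plain k-medoids on a precomputed distance matrix.

    Returns indices of the k medoids; any dissimilarity (e.g. a manifold
    distance between spectral bands) can be plugged in through `dist`.
    """
    n = dist.shape[0]
    medoids = rng.choice(n, k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)   # assign to nearest medoid
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                costs = dist[np.ix_(members, members)].sum(axis=0)
                new[c] = members[np.argmin(costs)]     # cheapest member becomes medoid
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids

# Two well-separated 1-D clusters; one medoid should land in each.
points = np.concatenate([rng.normal(0, 0.5, 20), rng.normal(10, 0.5, 20)])
dist = np.abs(points[:, None] - points[None, :])
medoids = k_medoids(dist, 2)
```

Unlike k-means, the cluster representatives are always actual data items, which is exactly what wavelength selection needs: each medoid is a real band that can be kept.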
Digital audio watermarking embeds inaudible information into digital audio data for the purposes of copyright protection, ownership verification, covert communication, and/or auxiliary data carrying. In this paper, we present a novel watermarking scheme to embed a meaningful gray image into digital audio by quantizing the wavelet coefficients (using the integer lifting wavelet transform) of audio samples. Our audio-dependent watermarking procedure directly exploits temporal and frequency perceptual masking of the human auditory system (HAS) to guarantee that the embedded watermark image is inaudible and robust. The watermark is constructed by utilizing a still image compression technique, breaking each audio clip into smaller segments, selecting the perceptually significant audio segments for the wavelet transform, and quantizing the perceptually significant wavelet coefficients. The proposed watermarking algorithm can extract the watermark image without the help of the original digital audio signals. We also demonstrate the robustness of the watermarking procedure to audio degradations and distortions, e.g., those that result from noise adding, MPEG compression, low-pass filtering, resampling, and requantization.
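Quantizing coefficients so the watermark can later be read without the original is, in essence, quantization index modulation (QIM). A minimal sketch (a one-level Haar split of synthetic samples stands in for the paper's integer lifting wavelet and perceptual selection; the step size is arbitrary):

```python
import numpy as np

def qim_embed(coeffs, bits, step=0.5):
    """Quantization index modulation: move each coefficient onto the
    even (bit 0) or odd (bit 1) quantization lattice."""
    q = np.round(coeffs / step)
    q += (q.astype(int) % 2) != bits      # bump parity where it mismatches the bit
    return q * step

def qim_extract(coeffs, step=0.5):
    """Recover bits from coefficient parity; no original signal needed."""
    return np.round(coeffs / step).astype(int) % 2

rng = np.random.default_rng(4)
audio = rng.normal(0, 1, 64)
# One-level Haar split: the approximation band carries the watermark.
approx = (audio[0::2] + audio[1::2]) / np.sqrt(2)
detail = (audio[0::2] - audio[1::2]) / np.sqrt(2)
watermark = rng.integers(0, 2, approx.size)
marked = qim_embed(approx, watermark)
recovered = qim_extract(marked)           # equals watermark, blind extraction
```

Extraction reads only the parity of the quantized coefficient, which is why the scheme is blind; perturbations smaller than half the step size leave the recovered bits unchanged, which is the source of the robustness.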
Hyperspectral data sets with high spatial resolution have been widely used in image classification research. A methodology based on mathematical morphology, which aims at extracting the structure of hyperspectral images, has been implemented: opening and closing morphological operations are applied to the hyperspectral data to retain the spatial information of objects, and morphological profiles are built from opening and closing transforms with structuring elements of different sizes. The proper definition of the structuring element is the key to extracting morphological features for classification, so which kind of structuring element is better can be assessed by comparing the classification results obtained with different structuring elements. In the experiments, we define four types of structuring elements to extract spatial features, which are then fed into a support vector machine (SVM) classifier; the influence of each type is judged by the classification accuracy. The results illustrate that disk-shaped structuring elements are superior to diamond-shaped ones, and that structuring elements of large radius outperform those of small radius.
In a wide-angle CCD imaging system, the large differences in distance between object points and their corresponding image points make the acquired image intensity non-uniform. To reduce this non-uniformity, this paper proposes a pixel gray-value correction method based on illuminance and irradiance theory. The irradiance relation between on-axis and off-axis object points is analyzed, and a compensation formula is derived to correct the image intensity. The proposed method has been tested on many images, and the results show its effectiveness and robustness.
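For a simple lens, the standard on-axis/off-axis irradiance relation is the cos^4 falloff; a correction of this general kind can be sketched as follows (the cos^4 law and the pixel-unit focal length are illustrative assumptions, not necessarily the paper's exact formula):

```python
import numpy as np

def cos4_correct(image, focal_px):
    """Compensate off-axis irradiance falloff.

    For a simple lens the irradiance at field angle theta falls off as
    cos^4(theta); dividing each pixel by cos^4 of its own field angle
    (from its distance to the optical axis and the focal length, both
    in pixels) flattens the image.
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - (w - 1) / 2, y - (h - 1) / 2)
    cos_t = focal_px / np.hypot(focal_px, r)
    return image / cos_t**4

# Synthetic flat scene seen through cos^4 vignetting (hypothetical focal length).
f = 200.0
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(x - (w - 1) / 2, y - (h - 1) / 2)
vignetted = 100.0 * (f / np.hypot(f, r))**4
flat = cos4_correct(vignetted, f)        # restored to a uniform 100
```

In practice the gain map would be calibrated once from a flat-field image rather than derived analytically, but the division-by-falloff structure is the same.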
Earthquakes are among the major natural disasters in the world; since the twentieth century they have caused a large number of casualties and great direct economic losses. With its advantages of wide coverage and high spatio-temporal resolution, remote sensing technology has been used to monitor residential distribution across different earthquake intensities. In this paper, based on the interpretation of GF-1 remote sensing data, a Digital Elevation Model (DEM), reference images, earthquake intensity, and resident population statistics, a residential distribution analysis model is built comprising three sub-models: GF-1 data processing, residential distribution monitoring, and residential distribution analysis. A case study of Nie lamu, Ji long, and Ding ri in Tibet during Nepal's magnitude-8.1 earthquake shows that the proposed model has high precision and can be used for residential distribution monitoring; combined with resident population statistics, the affected population in each earthquake intensity region can be obtained, enabling a quick qualitative assessment of the likely degree of earthquake impact.
The purpose of airborne LiDAR system calibration is to eliminate the influence of systematic errors and improve the precision of the original point cloud data. Under certain assumptions about the flight conditions, the direct positioning model for LiDAR can be reduced to a quasi-rigorous model, which also reduces the calibration model's dependence on the original observation data. To overcome the shortcomings of establishing strip correspondences by human interaction, an improved ICP method that considers object features in the point clouds is proposed to obtain the transformation between strips, and an automatic LiDAR system calibration procedure is established. Experiments with real LiDAR data from the Baotou test field show that the proposed calibration procedure can greatly eliminate the influence of systematic errors.
This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality metric assesses each video frame: the frame is divided into a noise image and a filtered image by a bilateral filter, which resembles the low-pass characteristic of human vision. The audio frames are evaluated by the PEAQ algorithm, and the two results are integrated to evaluate the overall video conference quality. A video conference database is built to test the performance of the proposed method; the objective results correlate well with MOS, from which we conclude that the proposed method is effective in assessing video conference quality.
Endmember selection is the key to successful pixel unmixing, which plays an important role in extracting urban impervious surface abundance. During the extraction, however, discriminating impervious surfaces from soils is problematic because of their spectral similarity, which makes it difficult to separate them during endmember selection. To address this issue, the biophysical composition index (BCI) and the soil adjusted vegetation index (SAVI) were introduced to enhance the impervious surface and bare soil information in the study area. Then, selecting high-albedo, low-albedo, soil, and vegetation endmembers with the help of the index histograms and the minimum noise fraction (MNF) scatter plot, we applied spectral mixture analysis (SMA) to extract impervious surface abundance. A multispectral Landsat TM scene was acquired for the interpretation and analysis of the impervious surface distribution. Experiments and comparisons indicate that this method estimates the subpixel impervious surface distribution with relatively high precision and small bias.
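The SMA step solves, per pixel, a small constrained least-squares problem. A sketch with hypothetical endmember spectra (the sum-to-one constraint is imposed via a heavily weighted row, a common simple trick; full SMA also enforces non-negativity):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Sum-to-one constrained least-squares spectral unmixing.

    Solves pixel ~= E @ a subject to sum(a) = 1 by augmenting the
    system with a heavily weighted constraint row.
    """
    bands, n = endmembers.shape
    big = 1e4                                 # weight enforcing sum-to-one
    A = np.vstack([endmembers, big * np.ones((1, n))])
    b = np.concatenate([pixel, [big]])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# Hypothetical 4-band reflectance spectra: vegetation and impervious endmembers.
E = np.array([[0.05, 0.40],
              [0.08, 0.38],
              [0.45, 0.35],
              [0.50, 0.33]])
mixed = 0.3 * E[:, 0] + 0.7 * E[:, 1]    # 30% vegetation, 70% impervious
abundances = unmix(mixed, E)             # recovers the mixing fractions
```

With four endmembers (high albedo, low albedo, soil, vegetation) the same system simply has four columns; the impervious abundance is then read from the corresponding entries of `a`.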
This paper proposes a novel filtering scheme for multispectral images using image fusion based on the Nonsubsampled Contourlet Transform (NSCT). First, an adaptive median filter is proposed that shows clear advantages in speed and in preserving weak edges. Second, a bilateral filter and the adaptive median filter are applied to the image separately, yielding two denoised images. NSCT multi-scale decomposition is then performed on the denoised images to obtain detail and approximation subbands, and the detail and approximation subbands are fused respectively. Finally, the output image is obtained by the inverse NSCT. Simulation results show that the method adapts well to textured images, suppresses noise effectively, and preserves image details; it outperforms the standard bilateral and median filters and their improved variants at different noise ratios.
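The adaptive median filter can be sketched in its textbook form, growing the window until the local median is itself impulse-free (the paper's fast, weak-edge-preserving variant is not reproduced here):

```python
import numpy as np

def adaptive_median(image, max_radius=3):
    """Textbook adaptive median filter.

    Grow the window until the local median lies strictly between the
    window extremes (i.e. is not an impulse); keep the centre pixel
    unless it is an impulse, in which case replace it with the median.
    If the window limit is reached, output the median.
    """
    padded = np.pad(image, max_radius, mode="edge")
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            for r in range(1, max_radius + 1):
                win = padded[i + max_radius - r:i + max_radius + r + 1,
                             j + max_radius - r:j + max_radius + r + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:                 # median is reliable
                    if not (lo < image[i, j] < hi):
                        out[i, j] = med           # centre was an impulse
                    break
            else:
                out[i, j] = med                   # window limit reached
    return out

rng = np.random.default_rng(5)
img = np.full((16, 16), 128.0)
img[rng.random((16, 16)) < 0.1] = 255.0          # 10% salt noise
restored = adaptive_median(img)                  # flat image recovered
```

Keeping non-impulse pixels untouched is what gives the filter its detail preservation relative to a plain median.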
Compared with wavelets, framelets have good time-frequency analysis ability and redundancy. SVD (Singular Value Decomposition) yields stable image features that are not easily destroyed. To further improve watermarking, a robust digital watermarking algorithm based on framelets and SVD is proposed. First, the Arnold transform is applied to the grayscale watermark image. Second, a framelet transform is applied to each host image block, the blocks being sized according to the watermark. The scrambled watermark is then embedded into the largest singular values produced by applying SVD to each coarse band of the framelet transform of each host block. Finally, the inverse SVD followed by the inverse framelet transform yields the watermarked coarse band. Experimental results show that the proposed method performs well in robustness and security against common image processing operations, including noise addition, cropping, filtering, and JPEG compression. Moreover, its watermark imperceptibility is better than that of the wavelet-based method, and it is more robust than a pure framelet method without SVD.
In this paper, the fine structures of lightning electromagnetic pulses (LEMPs), comprising 19 preliminary-breakdown pulses, 37 stepped leaders, 8 dart leaders, 73 first return strokes, and 52 subsequent return strokes, are analyzed using the Laplace wavelet. The main characteristics of the field waveforms are presented: the correlation coefficient, the dominant frequency, the peak energy, and the spread of the power spectrum. The instantaneous field peak of each pulse can be precisely located from the correlation coefficient. The pulses of preliminary breakdown and leaders are found to radiate dominantly in the range of 100 kHz to 1 MHz. The field radiated by first return strokes lies dominantly below 100 kHz, and that of subsequent return strokes below 50 kHz. The statistical results show that the Laplace wavelet is effective and can accurately determine the time-frequency characteristics of the electromagnetic fields of first and subsequent return strokes.
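The pulse-location step can be sketched as follows: a Laplace wavelet (a single-sided damped sinusoid) is slid along the record, and the normalized correlation coefficient peaks at the pulse position. The waveform parameters below (200 kHz carrier, damping ratio 0.05) are illustrative values inside the preliminary-breakdown band, not the paper's fitted parameters.

```python
import numpy as np

def laplace_wavelet(f, zeta, n, fs):
    # single-sided damped sinusoid (Laplace wavelet), normalized to unit energy
    t = np.arange(n) / fs
    env = np.exp(-zeta / np.sqrt(1.0 - zeta**2) * 2 * np.pi * f * t)
    w = env * np.sin(2 * np.pi * f * t)
    return w / np.linalg.norm(w)

def corr_coeff(x, w):
    # normalized correlation coefficient between the wavelet and every
    # signal segment; its peak localizes the pulse onset
    n = len(w)
    return np.array([np.dot(x[i:i+n], w) / (np.linalg.norm(x[i:i+n]) + 1e-12)
                     for i in range(len(x) - n + 1)])

# demo: a 200 kHz damped pulse buried in noise at sample 700
fs = 1.0e6
w = laplace_wavelet(f=200e3, zeta=0.05, n=100, fs=fs)
rng = np.random.default_rng(2)
x = 0.3 * rng.standard_normal(2000)
x[700:800] += 5.0 * w
peak = int(np.argmax(corr_coeff(x, w)))
```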
The fractional Fourier transform (FRFT), a generalization of the classical Fourier transform (FT), plays an important role in many areas of signal processing and optics, and many of its properties are well known. In signal processing, a chirp signal shows good energy concentration in the fractional Fourier domain (FRFD) when an appropriate fractional order is chosen, but the fractional energy spectrum integral (FESI) has not yet been studied. The purpose of this paper is to derive the FESI of the FRFT of a chirp signal. An important property is discovered: the FESI reaches its valley at the rotation angle where the FRFT reaches its peak. This provides a new approach to detecting chirp signals and estimating their parameters.
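The energy-concentration idea behind this property can be illustrated with the discrete analogue of a matched-order FRFT: dechirping with a candidate rate followed by an FFT. When the candidate matches the true chirp rate, the spectrum collapses to a single tone and its peak is maximal. This is a sketch of the concentration principle only, not the paper's FESI derivation; all parameters are illustrative.

```python
import numpy as np

fs, n = 1000.0, 1024
t = np.arange(n) / fs
true_rate = 80.0                                  # chirp rate, Hz/s (illustrative)
x = np.exp(1j * np.pi * true_rate * t**2 + 2j * np.pi * 50.0 * t)

def concentration(x, rate, t):
    # dechirp with a candidate rate, then FFT: when the candidate matches
    # the true rate the spectrum collapses to a tone and the peak of
    # |FFT|^2 is maximal, mirroring the FRFT peak at the matched angle
    y = x * np.exp(-1j * np.pi * rate * t**2)
    return np.max(np.abs(np.fft.fft(y)) ** 2)

rates = np.arange(0.0, 160.0, 5.0)
est = rates[np.argmax([concentration(x, r, t) for r in rates])]
```

Searching the rotation angle (here, the chirp rate) for the concentration peak is precisely the detection/estimation strategy the FESI valley makes available from the complementary direction.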
The presence of range and azimuth (or Doppler) ambiguities in synthetic aperture radar (SAR) is well known. The ambiguity noise depends on the antenna pattern and the pulse repetition frequency (PRF). Because a new frequency-modulated continuous-wave (FMCW) SAR is characterized by low cost, small size, and real-time signal processing, its antenna is likely to vibrate or deform owing to the lack of a stabilized platform, and the PRF cannot be very high because of the computational burden of real-time processing. The aim of this study is to assess and improve the performance of a new FMCW SAR system with respect to ambiguity noise. First, a quantitative analysis of the system's ambiguity noise level is performed, and an antenna with low sidelobes is designed. The analysis shows that the range ambiguity noise is small; the azimuth ambiguity noise is somewhat higher, but still small enough to have only a marginal influence on image quality. Finally, the ambiguity noise level is measured using imaging data from a Ku-band FMCW SAR, and the measured level coincides with the theoretical prediction.
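The quantitative analysis of azimuth ambiguity noise typically reduces to an azimuth ambiguity-to-signal ratio (AASR): the pattern-weighted energy aliased in from Doppler offsets at multiples of the PRF, relative to the energy in the processed band. The sketch below assumes a sinc²-shaped (sinc⁴ two-way power) azimuth pattern and illustrative Doppler parameters; the paper evaluates its actual low-sidelobe antenna.

```python
import numpy as np

def aasr_db(prf, proc_bw, bw_dopp, n_amb=8):
    # azimuth ambiguity-to-signal ratio for a sinc^4 two-way azimuth power
    # pattern (illustrative model): energy folded in from Doppler offsets
    # m*prf, m != 0, relative to the processed-band signal energy
    f = np.linspace(-proc_bw / 2, proc_bw / 2, 4001)
    g = lambda fd: np.sinc(0.886 * fd / bw_dopp) ** 4
    sig = g(f).sum()
    amb = sum(g(f + m * prf).sum() for m in range(-n_amb, n_amb + 1) if m != 0)
    return 10.0 * np.log10(amb / sig)

# illustrative numbers: 500 Hz PRF, 300 Hz processed bandwidth,
# 400 Hz Doppler 3 dB beamwidth
ratio = aasr_db(prf=500.0, proc_bw=300.0, bw_dopp=400.0)
```

The trade-off discussed above is visible directly in this formula: raising the PRF pushes the aliased terms further into the pattern sidelobes and lowers the AASR, but raises the real-time computation load.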
The micro-Doppler signature is one of the most prominent sources of information for target classification and identification. Since the Hough transform (HT) is an efficient tool for detecting weak straight traces in an image, an HT-based algorithm is proposed for separating the micro-Doppler signatures of multiple persons. A few seconds of data are processed at a time so that the human motion traces are approximately straight lines in the radar slow-time-range image. Applying the HT to this image, each person's motion trace can be recovered by recursively searching for peaks in the HT space. Applying a time-frequency transform to the range cells around each recovered line, the micro-Doppler signature of each person can be extracted and separated. Experimental results illustrate the validity of the proposed algorithm.
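The recursive peak search in HT space can be sketched as follows: build the standard (rho, theta) accumulator, take the strongest peak, blank its neighbourhood, and repeat, one motion trace per iteration. The synthetic two-line image stands in for a real slow-time-range map; image size, blanking radius, and line geometry are illustrative choices.

```python
import numpy as np

def hough(img, n_theta=180):
    # standard (rho, theta) Hough accumulator for a binary image
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(img)
    for ti, th in enumerate(thetas):
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (rho, ti), 1)
    return acc, thetas, diag

def strongest_lines(acc, k=2, wipe=5):
    # recursively take the k strongest peaks, blanking a neighbourhood
    # around each so that one trace is recovered per iteration
    acc = acc.copy()
    peaks = []
    for _ in range(k):
        r, t = np.unravel_index(np.argmax(acc), acc.shape)
        peaks.append((r, t))
        acc[max(0, r - wipe):r + wipe + 1, max(0, t - wipe):t + wipe + 1] = 0
    return peaks

# demo: two straight "motion traces" in a synthetic 64x64 slow-time-range image
img = np.zeros((64, 64))
img[20, :] = 1.0          # horizontal trace: rho = 20 at theta = 90 deg
img[:, 40] = 1.0          # vertical trace:   rho = 40 at theta = 0 deg
acc, thetas, diag = hough(img)
peaks = strongest_lines(acc, k=2)
rhos = sorted(r - diag for r, t in peaks)
```

In the full algorithm, the range cells along each recovered (rho, theta) line would then be passed to a time-frequency transform to extract that person's micro-Doppler signature.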
Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing-algorithm testing, and inversion-algorithm design. Time and frequency synchronization are key techniques in bistatic SAR (BiSAR) systems, and raw data simulation is an effective tool for verifying them. Based on the two-dimensional (2-D) frequency spectrum of a fixed-receiver BiSAR, a rapid raw data simulation approach that incorporates time and frequency synchronization errors is proposed in this paper. Through a 2-D inverse Stolt transform in the 2-D frequency domain and phase compensation in the range-Doppler domain, the method significantly improves the efficiency of scene raw data simulation. Simulation results for point targets and an extended scene validate the feasibility and efficiency of the proposed approach.
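A minimal sketch of the synchronization-error part: a residual oscillator frequency offset between transmitter and receiver appears as a linear phase ramp over slow time, and per-pulse time-sync jitter as random carrier-phase noise. All parameters and the error model here are illustrative assumptions; the paper's method additionally builds the scene response efficiently in the 2-D frequency domain via the inverse Stolt transform.

```python
import numpy as np

# illustrative BiSAR parameters (assumptions, not the paper's values)
prf, n_az, n_rg = 1000.0, 256, 8
fc = 9.6e9            # carrier frequency, Hz
df = 5.0              # residual oscillator frequency offset, Hz
jitter = 1e-12        # per-pulse time-synchronization jitter std, s

ta = np.arange(n_az) / prf                        # slow-time axis
raw = np.ones((n_az, n_rg), dtype=complex)        # stand-in for error-free raw data

# frequency-sync error -> linear phase ramp over slow time;
# time-sync jitter -> random carrier-phase noise per pulse
rng = np.random.default_rng(0)
phase = 2 * np.pi * df * ta + 2 * np.pi * fc * jitter * rng.standard_normal(n_az)
raw_err = raw * np.exp(1j * phase)[:, None]

# the azimuth spectrum of each range bin is shifted by df
shift_bins = int(np.argmax(np.abs(np.fft.fft(raw_err[:, 0]))))
```

Injecting the errors as a slow-time phase multiplication keeps the simulation cost at one complex multiply per sample, which is what allows the synchronization study to reuse a fast frequency-domain scene simulator.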