1. Introduction

Spectral imaging sensors are regularly implemented in remote sensing environments.1–3 A comprehensive review of existing snapshot hyperspectral imaging sensors is provided in Refs. 4 and 5. Generally, these sensors represent a diverse array of optical methodologies with complex spatial, spectral, and temporal resolution trade spaces. In this work, we improve the spectral resolution trade space of a snapshot Fourier transform imaging spectrometer6 through polarization grating (PG)-based spatial heterodyning.7 While past work represents contributions to this trade space, the narrow-line imaging spectrometer (NLIS) developed here demonstrates nanometer-scale spectral resolution and is the first snapshot imaging spectrometer that leverages Savart plates. High spectral resolution imaging spectrometers are often implemented for increased discrimination and the ability to monitor atomic transitions.8 Such systems are generally implemented using dispersive or grating elements.9 In the developed system, high spectral resolution is achieved using a field-widened calcite Savart plate.10–12 While Savart plate imaging spectrometry has been demonstrated in the past,13,14 snapshot imaging architectures have not been previously described. Additionally, for the first time, spatial heterodyning of a Savart plate's interference fringes using PGs is demonstrated.6,15–17 Spatial heterodyning improves the signal-to-noise ratio (SNR) of the system by reducing the frequency of the Savart plate's interference fringes: when high-frequency interferograms are heterodyned to a lower frequency, fringe contrast improves as a result of the sensor's modulation transfer function (MTF), thereby increasing the measured SNR.7 Additionally, the NLIS system is experimentally demonstrated for target identification using the acquired interferograms directly, without calibrating the interferograms to the spectral domain.
This process was implemented using both expectation maximization and neural networks, and the target discrimination performance of each was compared. As a basis for comparison, data from an outdoor scene containing closely spaced spectral lines from a model rocket engine were used. Similar to recent work,18 the neural network approach provides superior results. Specifically, it is demonstrated that the neural network approach eliminates the crosstalk from which the expectation maximization approach suffers. For additional sensor validation, a conventional spectral calibration is performed using expectation maximization. This method is then validated on a laboratory scene. This paper is organized as follows: a detailed model of the NLIS sensor is presented in Sec. 2, experimental parameters in Sec. 3, calibration methodology in Sec. 4, and results in Sec. 5.

2. System Design and Theoretical Model

Unique to this implementation is the high spectral resolution of the NLIS sensor, which is attained using a Savart plate interferometer (SPI).13,19 To reduce the frequency of the SPI's fringes, PGs are implemented for spatial heterodyning.7,15 To develop a model for the NLIS sensor, it is advantageous to first consider its constituent components independently. A conventional SPI is illustrated in Fig. 1(a), along with a PG interferometer in Fig. 1(b). The SPI in Fig. 1(a) consists of a reimaging lens with focal length $f$; two beam displacers, $BD_1$ and $BD_2$, with fast axes at plus and minus 45 deg in the $x$-$y$ plane, respectively; a half-wave plate ($HWP$) between them, with fast axis at 45 deg in the $x$-$y$ plane; and a generating ($P_1$) and an analyzing ($P_2$) linear polarizer at 45 deg. Polarized light from $P_1$ enters the first beam displacer and is split into an ordinary (O) and an extraordinary (E) ray. The split beams then pass through the $HWP$, where they are rotated to their orthogonal linear states.
Following rotation, each beam passes through $BD_2$, where the vertical state is refracted in the orthogonal direction, to produce EO and OE rays. Exiting the Savart plate apparatus are two collimated beams separated by a shear10

$$\Delta = \sqrt{2}\, t\, \frac{n_o^2 - n_e^2}{n_o^2 + n_e^2},$$

where $n_e$ and $n_o$ are the extraordinary and ordinary indices of refraction, respectively, and $t$ is the thickness of the beam displacers. When combined with the reimaging lens and focused on a focal plane array (FPA), the measured interference has the form

$$I(x) = \frac{I_0}{2}\left[1 + \cos\!\left(\frac{2\pi \Delta x}{\lambda f}\right)\right],$$

where $I_0$ is the incident intensity and $\lambda$ is the wavelength of light. Meanwhile, the PG interferometer is considered in Fig. 1(b). This setup consists of two PGs with orthogonal grating vectors between two parallel linear polarizers at 45 deg. In this configuration, light transmitted by $P_1$ becomes linearly polarized and transmits to $PG_1$, where it is diffracted into right- and left-circular polarization states. These diffracted rays diverge for a distance $d$ before encountering $PG_2$, where they are retrodiffracted and emerge collimated, generating a chromatic shear of

$$S = \frac{2 d \lambda}{\Lambda},$$

where $\Lambda$ is the period of the PGs. When these sheared rays are focused using a lens with focal length $f$, interference fringes are generated with intensity profile

$$I(x) = \frac{I_0}{2}\left[1 + \cos\!\left(\frac{4\pi d x}{\Lambda f}\right)\right].$$

Thus, interference fringes are produced that have no wavelength dependence, making the stacked PGs ideal for introducing a wavelength-dependent offset to perform heterodyning. Combining the systems from Figs. 1(a) and 1(b) yields the NLIS's operational concept, shown in Fig. 2, which demonstrates a heterodyned SPI. As shown in Fig. 2, the sheared beams from the Savart plate encounter a quarter-wave plate (QWP), where they are converted from vertical and horizontal polarization states into right and left circular polarization states, respectively. This polarization conversion prevents the states from being split again when encountering the PGs. These sheared circularly polarized beams then encounter the PG group, where they are diffracted toward the optical axis.
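To make the motivation for heterodyning concrete, the shear and fringe-period relations above can be evaluated numerically. All parameter values in this sketch are illustrative assumptions (textbook-level calcite indices, a nominal plate thickness), not the as-built values; the pixel pitch is that of the Sony ICX694 used later in the paper.

```python
import math

# Why heterodyne? A quick sketch (assumed values) of the raw Savart fringe
# period on the FPA versus the Sony ICX694's 4.54-um pixel pitch. Thicker
# plates buy resolution but push the fringes toward the MTF roll-off.
n_o, n_e = 1.649, 1.482      # approximate calcite indices near 770 nm (assumed)
t = 10e-3                    # beam-displacer thickness [m] (assumed)
f = 58e-3                    # reimaging-lens focal length [m]
lam = 770e-9                 # wavelength [m]
pitch = 4.54e-6              # ICX694 pixel pitch [m]

# Savart shear and the resulting fringe period lam*f/Delta at the focal plane
delta = math.sqrt(2) * t * (n_o**2 - n_e**2) / (n_o**2 + n_e**2)
period = lam * f / delta
print(f"shear = {delta*1e3:.2f} mm, fringe period = {period*1e6:.1f} um "
      f"({period/pitch:.1f} px/fringe)")
```

With fringes only a handful of pixels wide, the detector MTF attenuates their contrast, which is exactly the loss the PG-based heterodyning mitigates.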
Interference generated with this system can be modeled as

$$I(x) = \frac{I_0}{2}\left\{1 + \cos\!\left[\frac{2\pi \Delta x}{f}\left(\sigma - \sigma_h\right)\right]\right\},$$

where the heterodyne wavenumber $\sigma_h$ occurs when the frequency of the fringes is zero and is defined as

$$\sigma_h = \frac{2 d}{\Lambda \Delta}.$$

The final system is shown in Fig. 3 and follows directly from the heterodyned SPI in Fig. 2. In this final implementation, the shearing phenomena from the PGs and the Savart plate occur simultaneously, instead of consecutively as shown in Fig. 2. Additionally, to extend this methodology to division-of-aperture snapshot imaging spectrometry,20 the interferometer system is now considered with imaging components. A fore optic couples light into the lenslet array,21 where it is then collimated and passed into the interferometer. Diffracted light from $PG_1$ and $PG_2$ is converted from circular to linear states by a QWP and is then sheared by the Savart plate elements. Linearly polarized, sheared beams exiting the Savart plate are converted to right and left circular polarization states via a second QWP, then retrodiffracted by $PG_3$ and $PG_4$, emerging parallel to the optical axis. Finally, the circular states exiting the second PG grouping transmit through an analyzing polarizer $P_2$. These beams are then focused by a reimaging lens, with focal length $f$, to produce a focal plane that is coincident with the fringe localization plane. An additional attribute of the final system is that it incorporates two sets of PGs, adding a degree of freedom for tuning the grating period to adjust for tolerancing errors in the beam displacers. Placing two PGs in direct contact and counter-rotating them enables us to modify the effective grating period such that

$$\Lambda_{eff} = \frac{\Lambda_1 \Lambda_2}{\sqrt{\Lambda_1^2 + \Lambda_2^2 - 2\Lambda_1\Lambda_2\cos\theta}},$$

where $\Lambda_1$ and $\Lambda_2$ are the periods of each grating and $\theta$ is the angle between their grating vectors.22 Assuming all four PGs have the same period $\Lambda$, and the angle between $PG_1$ and $PG_2$ is equal to the angle between $PG_3$ and $PG_4$, the interference on the detector has the same distribution as above, with $\Lambda$ replaced by $\Lambda_{eff}$, where the heterodyne wavenumber can be calculated as

$$\sigma_h = \frac{2 d}{\Lambda_{eff}\, \Delta}.$$

3. Experimental Setup

Based on the design shown in Fig.
3, a system was constructed around an Allied Vision Technologies GX2750, utilizing a 6-megapixel Sony ICX694 FPA. In the system, shown in Fig. 4, the fore optic consists of a Nikkor F/1.2, 50-mm focal length objective that focuses light onto a field stop coincident with a fiber faceplate, which is used to eliminate parallax. Light from the faceplate is collimated by a 50-mm achromatic doublet (Thorlabs AC254-050-B) into the lenslet array, which was constructed from two stacked arrays with 1.5-mm focal lengths each. The subimages formed by the lenslet array were collimated by a Nikkor F/1.4, 50-mm collimation lens. An Omega Optical 50-mm-diameter bandpass filter, with 10% transmission cutoffs at 763 and 775 nm, was used to limit the spectral passband. Light then transmits through the PGs and Savart plate, where it undergoes beam displacement. Lastly, the reimaging lens is a Nikkor F/1.4, 58-mm focal length lens. This experimental configuration creates subimages whose lateral extent in pixels equals the spatial resolution of the reconstructed data. In this system, the Savart plate was constructed from two calcite beam displacers of equal thickness $t$. The high spectral resolution of the system is a manifestation of the Savart plate's thickness, the reimaging lens focal length, and the lateral extent of the detector, where the full-width at half maximum (FWHM) spectral resolution can be calculated as

$$\delta\sigma = \frac{f}{2\,\Delta\, x_{max}},$$

where $\Delta$ is the Savart plate shear defined above and $x_{max}$ is the FPA's maximum sampling distance from the optical axis. For the Sony ICX694, $x_{max}$ accounts for the coordinate system's rotation. The ordinary and extraordinary indices of calcite within the passband are obtained from the dispersion data of Ref. 23, from which the resolution can be calculated. Finally, the orientations of the polarization elements' eigen- or grating vectors have been referenced to the Savart plate's fast axis (or shearing direction), which is oriented at an angle $\theta_L$ from the global $x$-axis. For an $N \times N$ lenslet array, $\theta_L$ is given in Ref. 20.
Thus, $PG_1$ and $PG_2$ have grating vectors oriented at 32.6 deg, with $PG_3$ and $PG_4$ counter-rotated accordingly, and all four PGs were fabricated with a common period. Utilizing Eq. (7), the effective grating period was then determined, along with the spacing between the PG groupings. Lastly, a summary of the optical performance parameters is provided in Table 1.

Table 1. Optical performance parameters for the narrow-line imaging spectrometer.
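As a worked sketch of how the design relations combine, the snippet below chains the Savart shear, the effective PG period of Eq. (7), the heterodyne wavenumber, and the resulting spectral resolution. Every numeric value is a stand-in assumed for illustration, not an as-built parameter from Table 1.

```python
import math

# Numeric sketch tying together the design relations discussed above. All
# values are illustrative assumptions, not the as-built system parameters.
n_o, n_e = 1.649, 1.482           # approximate calcite indices near 770 nm (Ref. 23 has exact dispersion)
t = 10e-3                          # beam-displacer thickness [m] (assumed)
f = 58e-3                          # reimaging-lens focal length [m] (Nikkor 58 mm)
x_max = 6.0e-3                     # FPA max sampling distance from the axis [m] (assumed)
Lam = 6.0e-6                       # single-PG period [m] (assumed)
theta = math.radians(20.0)         # angle between counter-rotated grating vectors (assumed)
d = 17e-3                          # spacing between the PG groupings [m] (assumed)
lam = 770e-9                       # band-center wavelength [m]

# Savart-plate shear
delta = math.sqrt(2) * t * (n_o**2 - n_e**2) / (n_o**2 + n_e**2)

# Effective period of two contacted, counter-rotated equal-period PGs (Eq. (7))
Lam_eff = Lam / (2 * math.sin(theta / 2))

# Heterodyne wavenumber: fringe frequency vanishes when delta*sigma = 2d/Lam_eff
sigma_h = 2 * d / (Lam_eff * delta)

# Unapodized FTS resolution from the maximum optical path difference
d_lam = lam**2 * f / (2 * delta * x_max)

print(f"shear = {delta*1e3:.2f} mm")
print(f"Lam_eff = {Lam_eff*1e6:.1f} um, heterodyne wavelength = {1e9/sigma_h:.0f} nm")
print(f"spectral resolution ~ {d_lam*1e9:.1f} nm")
```

With these stand-in values the heterodyne wavelength lands near the 763- to 775-nm passband and the resolution comes out at the nanometer scale, consistent with the regime the paper reports.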
4. Calibration

With the constructed NLIS sensor, two means of spectral calibration were pursued. One approach used linear unmixing-based and neural network-based methods for direct target identification using interferogram data. Alternatively, a conventional spectral calibration using expectation maximization was performed for additional sensor validation.

4.1. Target Identification

Initially, instead of transforming the system's measurements to the spectral domain, acquired data were used directly for target identification. The system's interferograms are unique to the spectral profile of the scene, and thus it is possible to discriminate scene objects based on their spectral distribution. Similar approaches, such as the "smashed filter" in compressive sensing, also leverage multiplexed sensor measurements for similar purposes.24 This process is guided by linear unmixing, given by

$$\mathbf{b} = \mathbf{A}\boldsymbol{\alpha},$$

where $\mathbf{A}$ is a systems matrix whose columns contain the basis interferograms, $\mathbf{b}$ is an interferogram from an unknown scene, and $\boldsymbol{\alpha}$ is a vector containing abundance coefficients for the basis interferograms. As a consequence of the narrow-band nature of the system, only two principal components are required: $\mathbf{a}_{nl}$, the narrow-line or target component, and $\mathbf{a}_{bg}$, the background component. Using this methodology, Eq. (12) can be expanded to

$$\mathbf{b} = \alpha_{nl}\,\mathbf{a}_{nl} + \alpha_{bg}\,\mathbf{a}_{bg},$$

where $\alpha_{nl}$ and $\alpha_{bg}$ are the mixing, or abundance, coefficients25 for the narrow-line and background interferograms, respectively. This methodology assumes that the target and background adequately represent the spectral content of unknown scenes. While there may be concern that the narrow-line and background images provide an incomplete basis, particularly due to other lines in the waveband, such lines are not expected in environments of interest.
In the event additional lines become a concern, such shortcomings could be accommodated by adding basis images. To obtain the $\mathbf{A}$ matrix for single-target detection, three integrating sphere images were required: a flatfield image, an image from a tungsten-halogen source, and an image of a high-pressure sodium (HPS) source. The raw data frames for each image are shown in Fig. 5, in addition to the setup for obtaining each image in Fig. 6. Additionally, to ensure similarity of the HPS lamp to the desired target source, the potassium lines in the lamp were measured using a high-resolution optical spectrum analyzer. This spectrum is shown in Fig. 7. As shown in Fig. 7, the lines have an FWHM of 0.1 nm, measured using Gaussian fitting, which is well below the resolution of the sensor. Thus, the effect of high-pressure line broadening is negligible. To acquire calibration images of the tungsten or HPS sources, light from either lamp was directed into an integrating sphere using a fiber. Additionally, a flatfield was constructed using two images of the tungsten source, where the analyzer $P_2$ was rotated 90 deg between each image to produce two frames with fringes 180 deg out of phase. The two flatfield images were averaged to produce a single fringeless frame, as shown in Fig. 5(a). Using the images in Fig. 5, each basis image was divided by the flatfield image to remove the influence of vignetting. Following flatfield division, static image registration coefficients were applied to the tungsten and HPS images to produce two calibration datacubes. The tungsten and HPS datacubes were used to generate $\mathbf{a}_{bg}$ and $\mathbf{a}_{nl}$ at each object point, respectively. These two datacubes were the basis of the neural network training methodology, as shown in Fig. 8. In this context, target detection is commonly accomplished by solving Eq. (12) using pseudoinversion, such that

$$\hat{\boldsymbol{\alpha}} = \left(\mathbf{A}^T\mathbf{A}\right)^{-1}\mathbf{A}^T\,\mathbf{b}.$$

In addition to calibrating using pseudoinversion, in this work, Eq.
(14) was also solved using expectation maximization and neural networks. A comparison of performance for all three methods is provided in Sec. 5.1. Neural network-based spectral calibration has been performed in past work.6,18 Unique to this work, training data were constructed by leveraging random linear superposition of interferograms, instead of measuring random spectral distributions directly. To construct training data, the NLIS sensor's systems measurement matrix $\mathbf{A}$ was used, as shown in Fig. 8. This was done by generating a series of random abundance vectors and assembling them into an array to form $\mathbf{C}$, which served as output (target) values during training. From this, establishing the input values for training, $\mathbf{B}$, was a matter of multiplying $\mathbf{A}$ by $\mathbf{C}$, such that

$$\mathbf{B} = \mathbf{A}\mathbf{C}.$$

After constructing the training matrices $\mathbf{B}$ and $\mathbf{C}$, the MATLAB® neural network toolbox was used to train a 16-node cascade-forward neural network, using scaled-conjugate-gradient backpropagation. It should be noted that, by default, MATLAB® normalizes the training arrays $\mathbf{B}$ and $\mathbf{C}$ to the range [−1, 1]. To calibrate the entire field of view, a unique $\mathbf{A}$ matrix was constructed for each pixel in the field and used to train a corresponding network. For an unknown scene, each interferogram was calibrated by its corresponding network, thus producing two abundance images, $\alpha_{nl}$ and $\alpha_{bg}$. Following this procedure, each network was adapted to take a sensor measurement $\mathbf{b}$ from an unknown scene and produce relative abundance values $\hat{\boldsymbol{\alpha}}$ for the basis interferograms $\mathbf{a}_{nl}$ and $\mathbf{a}_{bg}$. Calibrating the raw spectral data in this way enables us to operate directly on the system interferograms to determine the localization of the target. The use of interferograms directly for target identification with neural networks represents a unique aspect of this work. Results comparing pseudoinversion, an EM-based technique, and the neural network approach are included in Sec. 5.1, but first a traditional spectral calibration with EM is considered.
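The training-data construction and two of the solvers can be sketched as follows. This is an illustrative Python analogue of the MATLAB workflow: the basis interferograms are simulated rather than measured, and the multiplicative EM iteration shown is one standard nonnegative-unmixing form (Richardson-Lucy type) assumed for illustration, since the paper's exact EM variant is not reproduced here.

```python
import numpy as np

# Illustrative sketch: simulate a two-column basis matrix A (narrow-line and
# background interferograms), build synthetic training data B = A @ C from
# random abundances C, then recover the abundances by pseudoinversion
# (Eq. (14)) and by an assumed multiplicative EM-style update.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 256)                    # detector coordinate (arbitrary units)
a_nl = 1 + 0.9 * np.cos(40 * np.pi * x)        # simulated narrow-line interferogram
a_bg = 1 + 0.9 * np.cos(6 * np.pi * x)         # simulated background interferogram
A = np.column_stack([a_nl, a_bg])              # systems matrix of Eq. (12)

C = rng.random((2, 500))                       # random abundance vectors (training targets)
B = A @ C                                      # corresponding interferograms (training inputs)

# Pseudoinverse recovery, Eq. (14)
C_pinv = np.linalg.pinv(A) @ B

# Nonnegative EM-style recovery (flat initialization, 100 multiplicative updates)
C_em = np.ones_like(C)
for _ in range(100):
    C_em *= (A.T @ (B / (A @ C_em))) / A.sum(axis=0)[:, None]

print("pinv max abundance error:", np.abs(C_pinv - C).max())
print("EM   max abundance error:", np.abs(C_em - C).max())
```

In this noiseless toy problem both solvers recover the abundances; the conditioning and crosstalk differences the paper reports arise with real, noisy interferograms.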
4.2. Full Spectral Calibration

To expand the capabilities of the NLIS system beyond single-target detection, we present an EM-based conventional spectral calibration. This methodology is similar to target identification in that calibration is modeled using Eq. (12). However, for full spectral acquisition, $\mathbf{A}$ was populated with a series of monochromatic interferograms, $\mathbf{a}_1$ through $\mathbf{a}_N$, such that spectra were collected across the sensor's bandpass. Each monochromatic interferogram was obtained using the monochromator setup shown in Fig. 9, where light from a xenon arc lamp was passed through a monochromator, and the monochromatic spectra were passed to an integrating sphere, which was then measured with the NLIS. Using this formalism for the $\mathbf{A}$ matrix, Eq. (12) can be expanded to

$$\mathbf{b} = \sum_{i=1}^{N} \alpha_i\, \mathbf{a}_i,$$

where $\alpha_1$ to $\alpha_N$ are the abundance coefficients for the monochromatic interferograms. By solving Eq. (16), each $\alpha_i$ can be determined, and subsequently the spectral distribution of an unknown scene. To produce a spectral image, this process is repeated for each pixel in the scene. Utilizing the setup in Fig. 9, 20 monochromatic images were acquired, linearly spaced in wavenumber from 762 to 778 nm, with an FWHM resolution of 0.8 nm. In Fig. 10(a), each raw data frame of the monochromatic integrating sphere images is depicted, along with an example $\mathbf{A}$ matrix, showing $\mathbf{a}_1$ through $\mathbf{a}_{20}$, in Fig. 10(b). Implementing the above methodology, results using 100 iterations of expectation maximization are presented in Sec. 5.2.

5. Results

For the NLIS sensor, tests were performed for both full spectral acquisition and target identification. The results for these experiments are discussed in the next two sections.

5.1. Target Detection Results

To test the target detection calibration technique, an outdoor scene was constructed using a tungsten-halogen lamp and a model rocket engine, which generates narrow-line spectra. Measurements of the tungsten lamp and the HPS lamp were used as basis measurements.
A separate measurement of the rocket combustion was used to verify the spectral similarities between the rocket and the HPS lamp in the sensor's spectral band. In this test, the tungsten lamp was aimed directly into the sensor to provide an additional near point source comparable in brightness to the rocket. In the combustion process, the motor generates narrow emission at 766.48 and 769.89 nm. In Fig. 11(a), a visible-light photo of the scene is illustrated, along with a panchromatic image from the NLIS sensor in Fig. 11(b). Following data acquisition, the calibration was tested using both the EM and neural network approaches developed in Sec. 4.1. In Figs. 12 and 13, color-fused results of the calibration of a video sequence are shown for the expectation maximization approach and the neural network approach, respectively; the contrast has been stretched to show detail in the color fusion. To further quantify these results, the narrow-line channel $\alpha_{nl}$, generated from pseudoinversion, expectation maximization, and neural networks, is considered. To do so, an error metric, crosstalk, is defined such that

$$X = \frac{1}{\alpha_{pk}} \sum_{m=1}^{M} \sum_{n=1}^{N} W(m,n)\, \alpha_{nl}(m,n),$$

where $M$ and $N$ define the abundance image size, $W$ is a windowing function, $m$ and $n$ are pixel coordinates, and $\alpha_{pk}$ is the peak on-rocket abundance value. To eliminate the contribution of the source in the error metric, $W$ is defined to be zero within a window centered on the motor and unity elsewhere. This metric assumes there are no sources of narrow-line bands anywhere other than at the motor, which for a scene of mostly vegetation is a reasonable assumption. In Fig. 14, the windowed abundance images for each calibration technique are presented. Using the images from Fig. 14, crosstalk is calculated for each method, and the results are presented in Table 2.

Table 2. Crosstalk comparison for each of the three target identification-based calibration methods.
Comparing the three methods demonstrates that the neural network approach is superior in preventing channel crosstalk. Using the neural network approach, 99.86% and 99.96% decreases in crosstalk are demonstrated when compared to pseudoinversion and expectation maximization, respectively. The color-fused results from the neural network approach showed no narrow-spectral-line localization except where the rocket engine was located. By comparison, the expectation maximization method showed multiple regions where narrow-line detection is present. In detection, the neural network approach would provide less risk of false-positive signatures. One possible explanation for the increase in performance could be that the neural network is better equipped to reject crosstalk due to stray light. However, this is unlikely, since all surfaces and elements were either index matched or antireflection coated, the sensor was contained in a matte black housing, and optical-grade calcite was used. A more likely explanation is that detection with raw interferograms results in poor conditioning of the measurement matrix $\mathbf{A}$.6,18

5.2. Full Spectral Calibration Results

Utilizing the techniques developed in Sec. 4.2, a conventional spectral calibration was performed, and a scene with various illumination sources and regions of shadowing was measured. A pictorial representation of the measured scene is shown in Fig. 15. For each of the illumination regions from Fig. 15, the measured spectrum is illustrated along with a panchromatic image of the scene in Fig. 16. From this experiment, it is possible to measure the spectral resolution of the system; the peaks in Fig. 16 have an FWHM resolution of 1.1 nm, measured using Gaussian fitting, which validates the model. Due to the high spectral resolution of the NLIS sensor, it is possible to resolve the narrow lines, spaced 3.41 nm apart.
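As a quick numerical check that a 3.41-nm line spacing is resolvable at 1.1-nm FWHM resolution, one can model the doublet as two Gaussians and count the local maxima of their sum:

```python
import numpy as np

# Sanity check (illustrative): two emission lines spaced 3.41 nm apart,
# each blurred to the sensor's measured 1.1-nm FWHM, still show two
# distinct local maxima in the summed profile.
lam = np.linspace(762, 778, 2000)                     # wavelength grid [nm]
fwhm = 1.1
sig = fwhm / (2 * np.sqrt(2 * np.log(2)))             # Gaussian sigma from FWHM
lines = [766.48, 769.89]                              # potassium doublet [nm]
spec = sum(np.exp(-((lam - c) ** 2) / (2 * sig**2)) for c in lines)

# Count strict interior local maxima of the profile
interior = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
n_peaks = int(interior.sum())
print("resolved peaks:", n_peaks)
```

At this separation the doublet is several sigma apart, so the two maxima remain well separated, consistent with the resolved peaks in Fig. 16.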
For further analysis, the HPS spectra measured using the NLIS sensor and an Ocean Optics HR4000 spectrometer are compared to NIST data in Fig. 17. Taking the NIST potassium lines at 1.1-nm FWHM resolution as truth, RMS errors of 0.1461 and 0.1754 are realized with the NLIS and the Ocean Optics spectrometer, respectively. As another metric, the absolute spacing between the dual peaks of the spectra, as obtained by the NLIS, is compared with that documented in the literature. According to the literature values, the spacing between the peaks is expected to be 3.41 nm. However, using the spacing between the local maxima of each peak, a spacing of 3.3 nm was measured with the NLIS, resulting in an error of 3.23%, which is within our sampling-granularity error of 0.65 nm. Additionally, a spacing of 3.4 nm was measured with the Ocean Optics spectrometer, resulting in an error of 0.29%. It is also useful to examine the performance of the sensor in relative radiometric accuracy. Per Ref. 28, the relative peak intensity, defined as the ratio of the 770-nm line intensity to the 766-nm line intensity, is 0.96. As shown in Fig. 17, the NLIS sensor measures a ratio of 0.83 and the Ocean Optics spectrometer measures a ratio of 0.67, resulting in percent errors of 13.5% and 30.2%, respectively.

6. Conclusions

A narrow-band, high-spectral-resolution imaging Fourier transform spectrometer capable of narrow-line discrimination has been developed. The prototype system demonstrates a measured spectral resolution of 1.1 nm FWHM. Additionally, in system calibration, an error of 3.23% in the peak separation of the two narrow spectral lines was demonstrated. Lastly, similar to past work, it was shown that neural networks provide superior performance in calibration. Specifically, neural networks provided a greater than 99% reduction in crosstalk in single-target detection techniques.

Acknowledgments

This work acknowledges support from SA Photonics Inc.
under Air Force Research Laboratory (AFRL), United States Air Force Contract Number FA8650-13-C-1589. NCSU. Released by AFRL/RYMT for public distribution, <3/3/16>, v2. Additionally, LB and ME acknowledge the partial support of the National Science Foundation (CAREER award ECCS-0955127) in this work.

References

1. M. Eismann, Hyperspectral Remote Sensing, PM210, SPIE Press, Bellingham, Washington (2012).
2. J. M. Bioucas-Dias et al., "Hyperspectral remote sensing data analysis and future challenges," IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013). http://dx.doi.org/10.1109/MGRS.2013.2244672
3. G. A. Blackburn, "Hyperspectral remote sensing of plant pigments," J. Exp. Bot. 58, 855–867 (2007). http://dx.doi.org/10.1093/jxb/erl123
4. N. Hagen and M. W. Kudenov, "Review of snapshot spectral imaging technologies," Opt. Eng. 52, 090901 (2013). http://dx.doi.org/10.1117/1.OE.52.9.090901
5. L. Gao and L. V. Wang, "A review of snapshot multidimensional optical imaging: measuring photon tags in parallel," Phys. Rep. 616, 1–37 (2016). http://dx.doi.org/10.1016/j.physrep.2015.12.004
6. B. D. Maione et al., "Spatially heterodyned snapshot imaging spectrometer," Appl. Opt. 55, 8667–8675 (2016). http://dx.doi.org/10.1364/AO.55.008667
7. M. W. Kudenov et al., "Polarization spatial heterodyne interferometer: model and calibration," Opt. Eng. 53, 044104 (2014). http://dx.doi.org/10.1117/1.OE.53.4.044104
8. S. S. Vogt et al., "HIRES: the high-resolution echelle spectrometer on the Keck 10-m telescope," Proc. SPIE 2198, 362–375 (1994). http://dx.doi.org/10.1117/12.176725
9. Y. Ji et al., "Analytical design and implementation of an imaging spectrometer," Appl. Opt. 54, 517 (2015). http://dx.doi.org/10.1364/AO.54.000517
10. D. Malacara, Optical Shop Testing, John Wiley & Sons, Hoboken, New Jersey (2007).
11. J. Li, J. Zhu, and X. Hou, "Field-compensated birefringent Fourier transform spectrometer," Opt. Commun. 284, 1127–1131 (2011). http://dx.doi.org/10.1016/j.optcom.2010.11.029
12. M. Françon and S. Mallick, Polarization Interferometers: Applications in Microscopy and Macroscopy, Wiley-Interscience, Hoboken, New Jersey (1971).
13. C. Zhang et al., "A static polarization imaging spectrometer based on a Savart polariscope," Opt. Commun. 203, 21–26 (2002). http://dx.doi.org/10.1016/S0030-4018(01)01726-6
14. C. Zhang et al., "Birefringent laterally sheared beam splitter-Savart polariscope," Proc. SPIE 6150, 615001 (2006). http://dx.doi.org/10.1117/12.677951
15. J. Kim et al., "Fabrication of ideal geometric-phase holograms with arbitrary wavefronts," Optica 2, 958 (2015). http://dx.doi.org/10.1364/OPTICA.2.000958
16. J. Harlander, R. J. Reynolds, and F. L. Roesler, "Spatial heterodyne spectroscopy for the exploration of diffuse interstellar emission lines at far-ultraviolet wavelengths," Astrophys. J. 396, 730–740 (1992). http://dx.doi.org/10.1086/171756
17. M. W. Kudenov et al., "Spatial heterodyne interferometry with polarization gratings," Opt. Lett. 37, 4413 (2012). http://dx.doi.org/10.1364/OL.37.004413
18. D. Luo and M. W. Kudenov, "Neural network calibration of a snapshot birefringent Fourier transform spectrometer with periodic phase errors," Opt. Express 24, 11266 (2016). http://dx.doi.org/10.1364/OE.24.011266
19. M. Richartz, "An improvement of Savart's polariscope," J. Opt. Soc. Am. 38, 623 (1948). http://dx.doi.org/10.1364/JOSA.38.000623
20. M. W. Kudenov and E. L. Dereniak, "Compact real-time birefringent imaging spectrometer," Opt. Express 20, 17973 (2012). http://dx.doi.org/10.1364/OE.20.017973
21. N. F. Borrelli et al., "Imaging and radiometric properties of microlens arrays," Appl. Opt. 30, 3633 (1991). http://dx.doi.org/10.1364/AO.30.003633
22. C. Oh et al., "High-throughput continuous beam steering using rotating polarization gratings," IEEE Photonics Technol. Lett. 22, 200–202 (2010). http://dx.doi.org/10.1109/LPT.2009.2037155
23. G. Ghosh, "Dispersion-equation coefficients for the refractive index and birefringence of calcite and quartz crystals," Opt. Commun. 163, 95–102 (1999). http://dx.doi.org/10.1016/S0030-4018(99)00091-7
24. M. A. Davenport et al., "The smashed filter for compressive classification and target recognition," Proc. SPIE 6498, 64980H (2007). http://dx.doi.org/10.1117/12.714460
25. C.-I. Chang, Hyperspectral Data Processing: Algorithm Design and Analysis, Wiley-Interscience, Hoboken, New Jersey (2013).
26. J. S. Tyo, E. N. Pugh, and N. Engheta, "Colorimetric representations for use with polarization-difference imaging of objects in scattering media," J. Opt. Soc. Am. A 15, 367 (1998). http://dx.doi.org/10.1364/JOSAA.15.000367
27. S. Chaudhuri and K. Kotwal, Hyperspectral Image Fusion, Springer, New York (2013).
28. S. Falke et al., "Transition frequencies of the D lines of 39K, 40K, and 41K measured with a femtosecond laser frequency comb," Phys. Rev. A 74, 032503 (2006). http://dx.doi.org/10.1103/PhysRevA.74.032503
Biography

Bryan Maione completed his BS degree in electrical engineering at the University at Buffalo in 2013. Following his undergraduate studies, he moved to North Carolina to attend NCSU and work under Michael Kudenov, where he earned his PhD in electrical engineering. He now works for Aqueti in Durham, North Carolina, USA, developing array cameras. His primary research interests include hyperspectral imaging, computational imaging, and machine learning.

Leandra Brickson is a PhD student in electrical engineering with a focus in photonics and nanofabrication. After some initial work in nanofabrication, she worked in the GPL group under Dr. Michael Escuti at North Carolina State University (NCSU), fabricating and designing liquid crystal polarization gratings and apodization phase plates. Her research interests include polymerization mechanisms, light transport modeling, and machine learning using deep learning. She is currently a PhD student at Stanford University.

Michael Escuti is currently a professor of electrical engineering at NCSU, Raleigh, North Carolina, where he pursues research topics in photonics, optoelectronics, diffractive optics, and liquid crystals. He has been recognized by the 2016 NCSU Innovator of the Year Award, the 2010 Presidential Early Career Award for Scientists and Engineers (NSF), the Glenn H. Brown Award (2004) from the International Liquid Crystal Society, and the OSA/New Focus Student Award (2002) from the Optical Society of America.

Michael Kudenov received his BS degree in electrical and computer engineering from the University of Alaska Fairbanks, Fairbanks, Alaska, in 2005 and his PhD degree in optical sciences from the University of Arizona, Tucson, Arizona, in 2009. He is an assistant professor of ECE at NCSU in Raleigh, North Carolina.
His lab researches compact high-speed hyperspectral, polarimetric, and interferometric sensors and sensing systems within multidisciplinary applications spanning remote sensing, defense, process monitoring, and biological imaging.