We have designed and fabricated a 176×144-pixel (QCIF) CMOS image sensor for on-chip bio-fluorescence imaging of the mouse brain. In our approach, a single CMOS image sensor chip is used without additional optics. This enables imaging at arbitrary depths in the brain, a clear advantage over existing optical microscopy methods. Packaging of the chip represents a challenge for in vivo imaging. We developed a novel packaging process whereby an excitation filter is applied directly onto the sensor, eliminating the filter cube found in conventional fluorescence microscopes. The fully packaged chip is about 350 μm thick. Using the device, we demonstrated in vitro on-chip fluorescence imaging of a 400 μm thick mouse brain slice detailing the hippocampus. The image obtained compares favorably with images captured by conventional microscopes in terms of resolution. To study imaging in vivo, we also developed a phantom medium. In situ fluorophore measurement shows that detection through the turbid medium is possible up to a thickness of 1 mm. We have successfully demonstrated imaging deep into the hippocampal region of the mouse brain, where quantitative fluorometric measurements were made. This work is expected to lead to a promising new tool for imaging the brain in vivo.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
We have fabricated a CMOS image sensor which can simultaneously capture optical and on-chip potential images. The target applications of the sensor are: 1) on-chip DNA (and other biomolecular) sensing and 2) on-chip neural cell imaging. The sensor was fabricated using a 0.35 μm 2-poly, 4-metal standard CMOS process. The sensor has a pixel array that consists of alternately arranged optical sensing pixels (88×144) and potential sensing pixels (88×144). The total size of the array is QCIF (176×144), and the pixel size is 7.5 μm × 7.5 μm. The potential sensing pixel has a sensing electrode that capacitively couples with the measurement target on the sensor. It can be operated either in a wide-range (over 5 V) mode or in a high-sensitivity (1.6 mV/LSB) mode. Two-dimensional optical and potential imaging was also demonstrated: probes with gel tips were placed on the sensor surface and a potential was applied. A potential spot with a diameter smaller than 50 μm was successfully observed in the dual imaging operation.
Mark L. Prydderch, Marcus J. French, Keith Mathieson, Christopher Adams, Deborah Gunning, Jonathan Laudanski, James D. Morrison, Alan R. Moodie, James Sinclair
Proceedings Volume Sensors, Cameras, and Systems for Scientific/Industrial Applications VII, 606803 (2006) https://doi.org/10.1117/12.641165
Degenerative photoreceptor diseases, such as age-related macular degeneration and retinitis pigmentosa, are the most common causes of blindness in the western world. A potential cure is to use a microelectronic retinal prosthesis to provide electrical stimulation to the remaining healthy retinal cells. We describe a prototype CMOS Active Pixel Sensor capable of detecting a visual scene and translating it into a train of electrical pulses for stimulation of the retina. The sensor consists of a 10 x 10 array of 100 micron square pixels fabricated on a 0.35 micron CMOS process. Light incident upon each pixel is converted into output current pulse trains with a frequency related to the light intensity. These outputs are connected to a biocompatible microelectrode array for contact to the retinal cells. The flexible design allows experimentation with signal amplitudes and frequencies in order to determine the most appropriate stimulus for the retina. Neural processing in the retina can be studied by using the sensor in conjunction with a Field Programmable Gate Array (FPGA) programmed to behave as a neural network. The sensor has been integrated into a test system designed for studying retinal response. We present the most recent results obtained from this sensor.
As the sizes of imaging arrays grow both in pixel count and in area, the possibility of pixel defects increases during manufacturing and packaging, and over the lifetime of the sensor. To correct for these possible pixel defects, a Fault Tolerant Active Pixel Sensor (FTAPS) with redundancy at the pixel level has been designed and fabricated at only a small cost in area. The noise of the standard Active Pixel Sensor (APS) and the FTAPS, under normal operating conditions as well as in the presence of optically stuck-high and stuck-low faults, is analyzed and compared. The analysis shows that under typical illumination conditions the total noise of both the standard APS and the FTAPS is dominated by photocurrent shot noise. In the worst case (no illumination) the total mean-squared noise of the FTAPS is only 15.5% larger than that of the standard APS, while under typical illumination conditions the FTAPS noise increases by less than 0.1%. In the presence of half-stuck faults, the noise of the FTAPS remains the same as for the FTAPS without defects. However, simulation and experimental results have shown that the sensitivity of the FTAPS is more than twice that of the standard APS, leading to an SNR more than twice that of the standard APS for the FTAPS with no defects. Moreover, the SNR of a faulty standard APS is zero, whereas the SNR of the FTAPS is reduced by less than half.
The Poisson and Normal probability distributions poorly match the dark current histogram of a typical image sensor. The histogram has only positive values and is positively skewed (with a long tail), whereas the Normal distribution is symmetric (and possesses negative values) and the Poisson distribution is discrete. Image sensor characterization and simulation would benefit from a different distribution function that matches the experimental observations better. Dark current fixed pattern noise is caused by discrete, randomly distributed charge-generation centers. If these centers shared a common charge-generation rate and were distributed uniformly, the Poisson distribution would result. The fact that it does not indicates that the generation rates vary, that a spatially non-uniform amplification is applied to the centers, or that the spatial distribution of centers is non-uniform. Monte Carlo simulations have been used to examine these hypotheses. The Log-Normal, Gamma, and Inverse Gamma distributions have been evaluated as empirical models for characterization and simulation. These models can accurately match the histograms of specific image sensors. They can also be used to synthesize the dark current images required in the development of image processing algorithms. Simulation methods can be used to create synthetic images with more complicated distributions.
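The competing hypotheses lend themselves to a quick Monte Carlo check. The sketch below (with assumed parameter values, not those of any particular sensor) places a Poisson-distributed number of generation centers in each pixel and compares a shared generation rate against log-normally varying per-center rates; only the latter reproduces the strong positive skew seen in real dark-current histograms.

```python
import math
import random

random.seed(0)

N_PIXELS = 20_000
MEAN_CENTERS = 4.0   # assumed mean number of generation centers per pixel

def poisson(lam):
    """Knuth's method; adequate for small lambda."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Centers distributed uniformly over the array -> Poisson counts per pixel.
counts = [poisson(MEAN_CENTERS) for _ in range(N_PIXELS)]

# Hypothesis 1: every center shares one rate -> pure Poisson dark current.
uniform_dark = [float(c) for c in counts]

# Hypothesis 2: per-center rates vary (log-normal here) -> long positive tail.
varying_dark = [sum(random.lognormvariate(0.0, 1.0) for _ in range(c))
                for c in counts]

print(skewness(uniform_dark), skewness(varying_dark))
```

With varying rates the sample skewness is several times larger than the Poisson value of 1/√λ, matching the qualitative argument above.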
On-die optics are an attractive way of reducing package size for imaging and non-imaging optical sensors. While systems incorporating on-die optics have been built for imaging and spectral analysis applications, these have required specialized fabrication processes and additional off-die components. This paper discusses the fabrication of an image sensor with neither of these limitations. Through careful design, an image sensor is implemented that uses on-die diffractive optics fabricated using a standard 0.18 micron bulk CMOS process, with simulations indicating that the resulting die is capable of acting as a standalone imaging system resolving spatial features to within ±0.15 radian and spectral features to within ±40 nm wavelength accuracy.
For years it was expected that high-quality scientific imaging would remain out of reach for CMOS sensors, since they lacked the required noise, non-uniformity, and dark-signal performance and hence could not compete with high-performance CCDs offering high quantum efficiency, large dynamic range, and special modes such as Time Delay Integration (TDI) and binning. However, CMOS imaging performs better than expected and today addresses applications requiring: TDI capability coupled to image acquisition in pushbroom mode, to enhance radiometric performance; very long linear arrays, thanks to stitching techniques; a high level of on-chip integration, with both panchromatic TDI and multispectral linear sensors; and on-chip correlated double sampling, for low-noise operation. This paper presents the design of a CMOS linear array, resulting from a collaboration between Alcatel Alenia Space and Cypress Semiconductor, which takes advantage of each of these emerging capabilities of CMOS technologies. It has 8000 panchromatic pixels with up to 25 rows used in TDI mode, and 4 lines of 2000 pixels for multispectral imaging. The main system requirements and detector trade-offs are presented, and test results obtained with a first-generation prototype are summarized and compared with predicted performance.
We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.
We have adapted Cu interconnect layers to realize high sensitivity in a small-pixel CMOS image sensor with a pixel size of 2.5 × 2.5 μm. We used a 1P3M CMOS process and applied a Back End of Line (BEOL) with a design rule equivalent to the 90-nm process. The Cu process features a fill factor about 15% greater and an interconnect layer height about 40% less than those of the Al process. As a result, the sensitivity at F5.6 is about 5% greater, while that at F1.2 is about 30% greater. One of the problems with the Cu process is that the Cu stopper film interferes with the light. Furthermore, this stopper film interacts with the SiO2 layers to form a multilayer, which leads to a discontinuity in the reflection characteristics at some wavelengths (ripple). Our method involves removing the stopper films together with all the layers, and we have also adopted an inner lens. Using these methods, we were able to eliminate the discontinuity in the reflection at some wavelengths. Another problem is the deterioration of the shading characteristics in the optical black area, where the black standard is assumed, due to the fact that the Cu interconnect layer is much thinner than the Al interconnect layer. We confirmed that the optical black shading requirement could be satisfied by using a color filter with the Cu interconnect layer. With the Cu interconnect process we have created a high-sensitivity 2.5 μm pixel CMOS image sensor, and this technology should enable further reduction of the pixel size to less than 2.5 μm.
Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
An ultra-wide dynamic range (WDR) CMOS image sensor (CIS) and the details of its evaluation are presented. The proposed signal readout technique of extremely short accumulation (ESA) enables the dynamic range of the image sensor to be expanded up to 146 dB. Including the ESA signals, a total of 4 signals with different accumulation times are read out in one frame period, based on a burst readout technique. To achieve the high-speed signal readout required for the multiple-exposure signals, column-parallel A/D converters are integrated at the upper and lower sides of the pixel array. The improved 12-bit cyclic ADCs with a built-in correlated double sampling (CDS) circuit have a differential non-linearity (DNL) of ±0.3 LSB.
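The abstract does not detail how the four accumulation-time signals are merged into one wide-dynamic-range value. A common approach, sketched below with assumed full-scale, saturation, and accumulation-ratio values (not the chip's actual parameters), is to take the longest unsaturated reading for each pixel and rescale it by the exposure ratio.

```python
# Hypothetical merge of multi-exposure readouts into one linear value per pixel.
FULL_SCALE = 4095                     # 12-bit ADC output (assumed)
SAT_LEVEL = 3900                      # saturation threshold (assumed)
T_ACC = [1.0, 1/16, 1/256, 1/4096]    # accumulation times, longest first (assumed ratios)

def merge_pixel(codes):
    """codes: ADC outputs for one pixel, ordered longest accumulation first.

    Returns the first unsaturated reading, rescaled to the radiometric
    scale of the longest accumulation."""
    for code, t in zip(codes, T_ACC):
        if code < SAT_LEVEL:                      # first unsaturated reading
            return code * (T_ACC[0] / t)
    return codes[-1] * (T_ACC[0] / T_ACC[-1])     # all saturated: best effort

# Dim pixel: the longest exposure is already unsaturated and is used directly.
print(merge_pixel([1200, 80, 5, 0]))          # -> 1200.0
# Bright pixel: only the extremely short accumulation survives saturation.
print(merge_pixel([4095, 4095, 4095, 2000]))  # -> 2000 * 4096 = 8192000.0
```

With a 1:4096 exposure ratio the merged scale spans roughly 4096 × 4095 codes, illustrating how an ESA-style readout extends dynamic range far beyond the ADC's native 12 bits.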
We report a low-voltage digital vision chip based on a pulse-frequency-modulation (PFM) photosensor, using capacitive-feedback reset and pulse-domain digital image processing, to explore the feasibility of low power consumption and high dynamic range even at a low power-supply voltage. One application of the vision chip is retinal prosthesis, in which the supplied power is limited. The pixel is composed of a PFM photosensor with a dynamic pulse memory, pulse gates, and a 1-bit digital image processor. The binary value stored in the dynamic pulse memory is read into the 1-bit digital image processor, which executes spatial filtering by mutual operations between the pulses from the pixel and those from the four neighboring pixels. The weights in image processing are controlled by the pulse gates. We fabricated a test chip in a standard 0.35-μm CMOS technology. The pixel size and pixel count were 100 μm square and 32 × 32, respectively. In the experiments, the four neighboring pixels were considered in image processing. The test chip operated successfully at a low power-supply voltage of around 1.25 V, with a frame rate of 26 kfps. Low-pass filtering, edge enhancement, and edge detection have been demonstrated, and the relationships between power-supply voltage and the characteristics of the vision chip are investigated.
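The light-to-pulse-frequency relation at the heart of a PFM pixel can be illustrated with a minimal sketch; the integration capacitance and voltage swing below are assumed values for illustration, not the chip's actual parameters.

```python
def pfm_pulse_count(i_photo, t_window, c_int=10e-15, v_swing=1.0):
    """Pulses emitted by a PFM pixel during t_window seconds.

    Each pulse marks one reset: the photocurrent must integrate a charge of
    c_int * v_swing between resets, so the pulse frequency is
    i_photo / (c_int * v_swing), proportional to light intensity.
    (c_int and v_swing are assumptions, not measured device values.)
    """
    return int(i_photo * t_window / (c_int * v_swing))

# Doubling the photocurrent doubles the pulse count: a linear light response.
print(pfm_pulse_count(1e-12, 0.1))  # -> 10
print(pfm_pulse_count(2e-12, 0.1))  # -> 20
```

Because the output is a pulse train rather than an analog voltage, this encoding degrades gracefully as the supply voltage drops, which is the property the chip above exploits.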
Geiger-mode avalanche photodiodes offer single-photon detection; however, conventional biasing and processing circuitry makes arrays impractical to implement. A novel photon counting concept, known as the DigitalAPD, is proposed which greatly simplifies the circuitry required for each device, giving the potential for large, single-photon-sensitive imaging arrays. The DigitalAPD treats each device as a capacitor. During a write, the capacitor is periodically charged into photon-counting mode and then left open-circuit. The arrival of photons causes the charge to be lost, and this loss is detected during a later read phase. Arrays of these devices have been successfully fabricated, and a read-out architecture employing well-known memory addressing and scanning techniques to achieve fast frame rates with a minimum of circuitry has been developed. A discrete prototype has been built to demonstrate the DigitalAPD with a 4×4 array. Line rates of up to 5 MHz have been observed using discrete electronics. The frame burst can be transferred to a computer, where the arrival of single photons at any of the 16 locations can be examined frame by frame. The DigitalAPD concept is highly scalable and is soon to be extended to a fully integrated implementation for use with larger 32×32 and 100×100 APD arrays.
Traditional imaging sensors for computer vision, such as CCD and CMOS arrays, have well-known limitations with regard to detecting objects that are very small in size (that is, a small object image compared to the pixel size), are viewed in a low contrast situation, are moving very fast (with respect to the sensor integration time), or are moving very small distances compared to the sensor pixel spacing. Any one or a combination of these situations can foil a traditional CCD or CMOS sensor array. Alternative sensor designs derived from biological vision systems promise better resolution and object detection in situations such as these. The patent-pending biomimetic vision sensor based on Musca domestica (the common house fly) is capable of reliable object rendition in spite of challenging movement and low contrast conditions. We discuss some interesting early results of comparing the biomimetic sensor to commercial CCD sensors in terms of contrast and motion sensitivity in situations such as those listed above.
Instrumentation was developed in 2004 and 2005 to measure the quantum efficiency of the Lawrence Berkeley National Lab (LBNL) total-depletion CCDs intended for astronomy and space applications. This paper describes the basic instrument. Although it is conventional even down to the parts list, there are important innovations. A xenon arc light source was chosen for its high blue/UV and low red/IR output as compared with a tungsten lamp. Intensity stabilization has been difficult, but since only flux ratios matter this is not critical. Between the light source and an Oriel MS257 monochromator are a shutter and two filter wheels. High-bandpass and low-bandpass filter pairs isolate the 150-nm-wide bands appropriate to the wavelength, thus minimizing scattered light and providing order blocking. Light from the auxiliary port enters a 20-inch optical sphere, and the 4-inch output port is at right angles to the input port. An 80 cm drift space produces near-uniform illumination on the CCD. Next to the cold CCD inside the horizontal dewar is a calibrated reference photodiode, which is regulated to the PD calibration temperature of 25 °C. The ratio of the CCD and in-dewar reference PD signals provides the QE measurement. Additional cross-calibration to a PD on the integrating sphere permits lower-intensity exposures.
The usual QE measurement relies heavily on a calibrated photodiode (PD) and on knowledge of the CCD's gain; either can introduce significant systematic errors. But 1 − R ≥ QE, where R is the reflectivity, and over a significant wavelength range 1 − R = QE. An unconventional reflectometer has been developed to make this measurement. R is measured in two steps, using light from the lateral monochromator port via an optical fiber. The beam intensity is first measured directly with a PD; then both the PD and the CCD are moved so that the optical path length is unchanged and the light reflects once from the CCD. The ratio of the two PD currents is R. Unlike the traditional VW scheme, this approach makes only one reflection from the CCD surface. Since the reflectivity of the LBNL CCDs may be as low as 2%, this increases the signal-to-noise ratio dramatically. The goal is 1% accuracy. We obtain good agreement between 1 − R and the direct QE results.
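The measurement reduces to a ratio of photodiode currents. A minimal sketch of the two-step procedure and the resulting QE bound, with hypothetical current values chosen to mimic a ~2% reflective surface:

```python
def reflectivity(i_direct, i_reflected, i_dark=0.0):
    """R from the two-step measurement: the ratio of dark-subtracted PD
    currents, with a single reflection from the CCD surface."""
    return (i_reflected - i_dark) / (i_direct - i_dark)

# Hypothetical currents: direct beam, beam after one CCD reflection, PD dark current.
r = reflectivity(i_direct=100.0e-9, i_reflected=2.1e-9, i_dark=0.1e-9)

# 1 - R bounds the QE from above; equality holds where absorption losses
# other than photogeneration vanish.
qe_upper_bound = 1.0 - r
print(r, qe_upper_bound)
```

With a single reflection the measured signal is proportional to R rather than R² (as in a VW scheme), which is what preserves signal-to-noise for a surface this dark.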
A 28-Mpixel, full-frame CCD imager with a 7.2×7.2 μm2 pixel size and a Bayer RGB color pattern was developed for use in professional applications. As a unique option, an RGB-compatible binning feature was designed into this sensor, making it possible to exchange resolution for sensitivity, read-out speed, and signal-to-noise ratio. This paper presents the device architecture, the RGB binning principle, and evaluation results for the overall sensor performance. The device simulations performed and the evaluation results of the RGB binning feature are described in detail.
Recent discoveries show new promise for a technology formerly assumed extinct: CCDs. A primary limitation to the implementation of new ground-based astronomy measurement techniques is the inaccuracy of navigation and targeting due to error in the celestial frame of reference. This celestial frame of reference is relied upon for satellite attitude determination, payload calibration, in-course missile adjustments, space surveillance, and the accurate star positions used as fiducial points. STA describes the development of an ultrahigh-resolution CCD (up to the maximum limit of a 150 mm wafer) that integrates high dynamic range and fast readout and will substantially decrease the error in the celestial reference frame. STA also discusses prior and ongoing experience with large-area CCD focal-plane arrays, including innovative design and fabrication techniques that ensure performance and yield.
The orthogonal-transfer array (OTA) is a new charge-coupled device (CCD) concept for wide-field imaging in ground-based astronomy, based on the orthogonal-transfer CCD (OTCCD). The device combines an 8×8 array of small OTCCDs, each about 600×600 pixels, with on-chip logic to provide independent control and readout of each CCD. It provides spatially varying electronic tip-tilt correction for wavefront aberrations, as well as compensation for telescope shake. Tests of prototype devices have verified correct functioning of the control logic and demonstrated good CCD charge-transfer efficiency and high quantum efficiency. Independent biasing of the substrate down to −40 V has enabled fully depleted operation of 75-μm-thick devices with a good charge PSF. Spurious charge or "glow" due to impact ionization from high fields at the drains of some of the NMOS logic FETs has been observed, and reprocessing of some devices from the first lot has resolved this issue. Read noise levels have been 10-20 e−, higher than our goal of 5 e−, but we have identified the likely sources of the problem. A second design, currently in fabrication, uses a 10-μm pixel, resulting in a 22.6-Mpixel device measuring 50×50 mm. These devices will be deployed in the U. of Hawaii Pan-STARRS focal plane, which will comprise 60 OTAs with a total of nearly 1.4 Gpixels.
Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. The system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine the phase, and hence the distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to yield a range precision on the order of 1 mm; these primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
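The per-pixel phase extraction described above can be sketched as a single-bin DFT over one beat period. The 10 MHz modulation frequency is taken from the system description; the sample count, offsets, and noise handling are simplified assumptions for illustration.

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 10e6        # 10 MHz modulation frequency (from the system above)

def pixel_range(samples):
    """Range for one pixel from N intensity samples spanning one beat period.

    The phase of the 1 Hz beat signal is recovered with a single-bin DFT,
    and range follows from the round-trip phase delay at the modulation
    frequency. Sketch only: DC offsets cancel in the DFT, and phase
    unwrapping beyond one cycle is ignored."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    q = -sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    phase = math.atan2(q, i) % (2 * math.pi)
    return C * phase / (4 * math.pi * F_MOD)

# Synthetic pixel 3 m away: round-trip phase is 4*pi*f*d/c.
d_true = 3.0
phi = 4 * math.pi * F_MOD * d_true / C
samples = [5.0 + math.cos(2 * math.pi * k / 8 + phi) for k in range(8)]
print(pixel_range(samples))   # ≈ 3.0, the simulated distance
```

At 10 MHz the unambiguous range is c/(2f) = 15 m, which is one reason raising the operating frequency trades maximum range for precision.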
In this paper we explore potential applications of a new transparent electro-optic ceramic in 3D imaging as a fast phase shifter, and demonstrate its performance in a newly developed Low Coherence Polarization Interference Microscopy (LCPIM) system. The new phase modulator is fast, convenient, and inexpensive, and it makes the 3D imaging system that employs it mechanically efficient and compact. The LCPIM proposed in this paper has the advantages of rapid phase shifting and an adjustable reference/objective intensity ratio for maximum contrast. The feasibility of the new phase modulator has been proven in several experiments.
The International Civil Aviation Organization (ICAO) is the regulatory body for airports. ICAO standards dictate that luminaires used within an airport landing lighting pattern must have a color as defined on the CIE 1931 chromaticity chart of the Commission Internationale de l'Eclairage. Currently, visual checks are used to ensure that luminaires within the pattern are operating at the correct color: during an approach to an airport, the pilot must visually check each luminaire within the landing pattern. These visual tests are combined with on-the-spot meter readings. This method is inaccurate, and it is impossible to assess the color of every luminaire. This paper presents a novel, automated method for assessing the color of luminaires using low-cost single-chip CCD video camera technology. Images taken from a camera placed within an aircraft cockpit during a normal approach to an airport are post-processed to determine the color of each luminaire in the pattern. In essence, the pixel coverage and total RGB level for each luminaire within the pattern must be extracted and tracked throughout the complete image sequence, and an average RGB value used to predict the luminaire color. This prediction is based on a novel pixel model derived to determine the minimum pixel coverage required to accurately predict the color of an imaged luminaire. An analysis of how many pixels are required for color recognition, and for position within the CIE color chart, is given and verified empirically. The analysis shows that a minimum diameter of four pixels is required for color recognition of the major luminaire types within the airport landing pattern. The number of pixels required for classification of the color is then derived. This matters because the luminaires are far away when imaged and may cover only a few pixels, since the camera must view a large area. The classification is then verified by laboratory-based experiments with different luminaire sources. This paper shows that it is possible to accurately predict the color of luminaires using automated image analysis. While automated color analysis itself is not new, the authors have shown that it is possible to simulate the operation of a single-chip CCD imager and to establish the minimum pixel coverage required to accurately represent a colored luminaire imaged from a moving platform. In addition, the color assessment of airport landing lighting has not previously been tackled using photometrics and dynamic cameras. The principles outlined are generic and can therefore be applied to other areas of lighting research, such as signal and street lighting. Future color work on airport lighting will concentrate on more complex models of luminaire movement across a mosaic filter.
A current assumption in this paper is that there are no gaps between the pixel patches, which is not the case; an updated model will incorporate inter-pixel gaps.
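The color-prediction step (average the tracked RGB readings for a luminaire, then assign a color) can be sketched minimally as below. The reference colors and the nearest-neighbour classifier are illustrative assumptions for the sketch, not the paper's pixel model or its empirically derived thresholds:

```python
import numpy as np

# Hypothetical reference colors (mean RGB) for common airport luminaire
# types; real values would come from calibrated measurements.
REFERENCE_COLORS = {
    "white": (255, 255, 255),
    "red": (230, 40, 40),
    "green": (40, 220, 90),
    "yellow": (240, 210, 60),
}

def classify_luminaire(rgb_samples):
    """Average the per-frame RGB readings of one tracked luminaire and
    return the nearest reference color (Euclidean distance in RGB)."""
    mean_rgb = np.mean(np.asarray(rgb_samples, dtype=float), axis=0)
    return min(REFERENCE_COLORS,
               key=lambda name: np.linalg.norm(mean_rgb -
                                               np.array(REFERENCE_COLORS[name])))
```

Averaging over the whole tracked sequence, rather than classifying single frames, is what lets a luminaire covering only a few pixels yield a stable color estimate.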
We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables that store correction data at eight focal-length points on the blue and red channels. When focal-length data is input from the lens control unit, the relevant correction data are interpolated from two of the eight correction data tables, and the system performs a geometrical conversion on both channels using these data. This paper shows that the correction function reduces the lateral chromatic aberration to an amount small enough to ensure the desired image resolution over the entire range of the lens in real time.
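The table-interpolation idea can be sketched as follows, assuming linear interpolation between the two nearest stored focal-length points. The focal-length range and per-channel correction values below are invented for illustration; the real system interpolates full geometric-warp tables for the blue and red channels, not single scale factors:

```python
import numpy as np

# Illustrative only: eight focal-length sample points with per-channel
# magnification-correction factors (assumed values, not measured data).
FOCAL_POINTS = np.linspace(10.0, 50.0, 8)    # mm, assumed zoom range
BLUE_SCALE = np.linspace(1.0020, 1.0005, 8)  # assumed correction data
RED_SCALE = np.linspace(0.9985, 0.9996, 8)

def correction_at(focal_length):
    """Linearly interpolate the blue- and red-channel correction factors
    between the two nearest of the eight stored focal-length points."""
    b = np.interp(focal_length, FOCAL_POINTS, BLUE_SCALE)
    r = np.interp(focal_length, FOCAL_POINTS, RED_SCALE)
    return b, r
```

Storing a few sample points and interpolating at runtime keeps the memory footprint small while still tracking the aberration smoothly across the zoom range.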
High-resolution electron imaging is very important in the nanotechnology and biotechnology fields. For example, cryogenic electron microscopy is a promising method for obtaining 3-D structures of large protein complexes and viruses. We report on the design and measurements of a new CMOS direct-detection camera system for electron imaging. The active pixel sensor array comprises 512 x 550 pixels, each 5 x 5 μm in size, with an ~8 μm epitaxial layer to achieve an effective fill factor of 100%. A spatial resolution of 2.3 μm for a single incident electron has been measured. Electron microscope tests have been performed with 200 and 300 keV beams, and the first recorded electron microscope image is presented.
This paper presents a novel measurement system that assesses the uniformity of a complete airport lighting installation. The system improves the safety of aircraft landing procedures by ensuring that airport lighting is properly maintained and conforms to the current standards and recommendations laid down by the International Civil Aviation Organisation. The measuring device consists of a CMOS vision sensor with an associated lens system fitted to the interior of an aircraft. The vision system captures sequences of airport lighting images during a normal approach to an aerodrome; these images are then post-processed to determine the uniformity of the complete pattern. Airport lighting consists of elevated approach luminaires and inset runway luminaires. Each luminaire emits an intensity that is dependent on the angular displacement from the luminaire: during a normal approach, a given luminaire appears at its maximum intensity and falls to its minimum as the aircraft approaches and finally passes over it. It is therefore possible to predict the intensity that each luminaire within the pattern emits at a given time during a normal approach, and any luminaires emitting the same intensity can be banded together for the uniformity analysis. Having derived the theoretical groups of similar luminaires within a standard approach, this information was applied to a sequence of airport lighting images recorded during an approach to Belfast International Airport. Since the aim is to determine the uniformity of the pattern, only the total pixel grey level representing each luminaire within each banded group needs to be extracted and tracked through the entire image sequence. Any luminaire that fails to meet the requirement (a threshold value depending on the performance of the other luminaires in its band) is monitored and reported to the assessor for attention. The extraction and tracking algorithms have been optimised for minimal human intervention: techniques such as connected-component analysis and centre-of-mass algorithms are used to detect and locate the luminaires, and a search algorithm obtains the brightness (total grey level) of each luminaire. In the sample test at Belfast International Airport, several luminaires were found not to output sufficient intensity. Overall, however, the Belfast International lighting pattern conforms to the standards, as no two consecutive luminaires in the pattern fail. The techniques used in this paper are novel; no known research couples the uniformity of airport lighting with photometrics. A solid basis has been established for future work on monitoring the individual characteristics of the luminaires, including colour and intensity measurements.
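The band-and-threshold check reduces to flagging outliers within each iso-intensity group. A minimal sketch follows; the median-based threshold and its 0.7 ratio are illustrative assumptions, since the paper defines the threshold only as depending on the performance of the other luminaires in the band:

```python
import numpy as np

def flag_dim_luminaires(band_grey_levels, threshold_ratio=0.7):
    """Given the total grey levels of the luminaires in one
    iso-intensity band, return the indices of any luminaire falling
    below threshold_ratio times the band median.
    (threshold_ratio = 0.7 is an assumed parameter for illustration.)"""
    levels = np.asarray(band_grey_levels, dtype=float)
    limit = threshold_ratio * np.median(levels)
    return [i for i, v in enumerate(levels) if v < limit]
```

Using a statistic of the band itself (here the median) as the reference makes the check relative, so it tolerates frame-to-frame exposure changes that shift every luminaire's grey level together.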
We have developed a compact and cost-effective camera module based on wafer-scale replica processing. The multi-layered stack of aspheric lenses in a mobile-phone camera module is first assembled by bonding multiple glass wafers on which two-dimensional replica arrays of identical aspheric lenses are UV-embossed, followed by dicing the stacked wafers and packaging them with image sensor chips. This wafer-scale processing leads to at least 95% yield in mass production, and potentially to a very slim phone with a camera module less than 2 mm thick. We have demonstrated a VGA camera module fabricated by wafer-scale replica processing with various UV-curable polymers having refractive indices between 1.4 and 1.6, and with three different glass wafers whose both surfaces are embossed as aspheric lenses having 230 μm sag height and aspheric coefficients of the lens polynomials up to tenth order. We have found that precise compensation for the shrinkage of the polymer materials is one of the key technical challenges in achieving higher resolution in wafer-scale lenses for mobile-phone camera modules.
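The surfaces described (230 μm sag, polynomial coefficients up to tenth order) follow the standard even-asphere sag equation: a conic base term plus even polynomial terms. A small evaluator is sketched below; the radius, conic constant, and coefficients passed in would come from an actual lens prescription, which the abstract does not give:

```python
import math

def aspheric_sag(r, R, k, coeffs):
    """Sag z(r) of a standard aspheric surface: conic base term plus
    polynomial terms. R is the base radius of curvature, k the conic
    constant, and coeffs maps polynomial order (4, 6, 8, 10, ...) to
    its coefficient."""
    c = 1.0 / R  # base curvature
    z = c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    return z + sum(a * r**n for n, a in coeffs.items())
```

For small r with k = 0 and no polynomial terms, the formula reduces to the familiar spherical approximation z ≈ r²/(2R), which is a quick sanity check on any prescription entered this way.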
Counting of colonies and plaques has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids, and fungal contamination. Many researchers and developers have recently worked on systems of this kind, but investigation shows that some existing systems, being products of a new technology, still have problems. One of the main problems is image acquisition. To acquire colony images of good quality, an illumination box was constructed as follows: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, the lighting is uniform and the colony dish can be placed in the same position every time, which simplifies image processing. A digital camera in the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.
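With uniform lighting and a fixed dish position, colony counting largely reduces to labeling connected foreground regions in a thresholded image. A minimal stand-in for that stage is sketched below (pure Python, 4-connectivity); a real system would add size filtering and separation of touching colonies:

```python
from collections import deque

def count_colonies(binary_image):
    """Count 4-connected foreground regions in a thresholded image,
    given as a list of rows of 0/1 values. Each region is taken to be
    one colony."""
    rows, cols = len(binary_image), len(binary_image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for y in range(rows):
        for x in range(cols):
            if binary_image[y][x] and not seen[y][x]:
                count += 1  # new region found; flood-fill it
                seen[y][x] = True
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary_image[ny][nx]
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count
```

This is the part of the pipeline that uniform back lighting makes easy: with an even background, a single global threshold usually suffices to produce the binary image.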
This paper presents an improved automated method for calculating the MTF (Modulation Transfer Function) of an optical system. It reviews the theoretical background, describes various techniques for calculating the edge response and the MTF, and proposes an improved method that is shown to be more accurate and robust. An additional advantage of the proposed method is that it can be fully automated, and it is valid for a range of optical imaging systems. The results are compared and conclusions are drawn regarding the validity of the technique.
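For reference, the classic edge-based MTF computation that this family of techniques builds on is: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then take the normalized magnitude of its Fourier transform. A bare-bones sketch (no oversampling, windowing, or noise handling, which are exactly the areas such improved methods address):

```python
import numpy as np

def mtf_from_edge(esf):
    """Compute the MTF from a 1-D edge spread function: differentiate
    to get the line spread function, then return the magnitude of its
    Fourier transform normalized to the zero-frequency value."""
    lsf = np.diff(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

For a perfectly sharp edge the LSF is a single impulse and the MTF is flat at 1.0; any blur widens the LSF and rolls the MTF off toward higher frequencies.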
Past and present advances in scientific imaging have stressed the importance of CCD small-signal sensitivity. Current characterization techniques use an Fe-55 soft x-ray source to determine charge transfer and noise, but this leaves the response at signal levels below 1620 e- unknown. CCD evolution has brought innovative design and fabrication techniques that decrease device noise and increase sensitivity, enabling low-level imaging. This paper presents a simple approach to characterizing the transfer functions, linearity, noise, and output sensitivity at low signal levels, thus confirming the true capabilities of the imager. The characterization technique also validates the quality of the base material and process via low-level trap testing. The method uses fluorescent x-rays from target materials to illuminate the imager.
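The physics behind the target-material approach is that each absorbed fluorescence photon of known energy creates a predictable charge packet in silicon (roughly one electron-hole pair per 3.65 eV), so lower-Z targets with softer fluorescence lines probe signal levels well below the ~1620 e- of the Fe-55 Mn K-alpha line. A sketch of the conversion (values approximate; the sensitivity helper and its names are illustrative, not the paper's procedure):

```python
# Approximate electron-hole pair creation energy in silicon at room
# temperature, in eV per pair.
EV_PER_PAIR = 3.65

def expected_electrons(photon_ev):
    """Mean number of electrons generated in silicon by one absorbed
    x-ray photon of the given energy (eV)."""
    return photon_ev / EV_PER_PAIR

def sensitivity_e_per_adu(photon_ev, measured_adu):
    """Estimate camera gain in e-/ADU from the mean digital output of
    single-photon events of known energy (illustrative helper)."""
    return expected_electrons(photon_ev) / measured_adu
```

For example, the Mn K-alpha line at about 5.9 keV yields the familiar ~1620 e-, while an aluminium target (K-alpha near 1.49 keV) would deposit only around 400 e- per event, putting calibration points directly in the low-signal regime of interest.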