The state-of-the-art position of cameras in forward-facing Advanced Driver Assistance Systems (ADAS) is behind the windshield, integrated within the rearview mirror holder. In this position, the quality of the windshield as an optical element directly impacts the quality of the camera image. With increasing camera resolution and the narrow field-of-view optics required for large object detection distances, the optical impact of the windshield becomes increasingly important. We propose a computer-graphics-based method for evaluating the optical performance of windshields in front of ADAS cameras. Using a ray tracing framework, we produce quantitative simulations of the light transport through the windshield. To represent the geometry of the windshield, we fit ellipsoid models to measurements of its inner and outer surfaces produced using a chromatic white light sensor in a coordinate measuring machine. The ellipsoid fits enable accurate ray intersections with the windshield even for cameras positioned close to the windshield surface. Additionally, we investigate the windshield microgeometry using optical profilometry and find that the microstructure is smaller than 200 nm. Thus, the microgeometry can only cause a very slight diffractive blur, and we consider the ellipsoidal macrogeometry sufficient for evaluating the light transport. In simulation experiments, we evaluate the impact of the windshield on a forward-facing ADAS camera by computing the modulation transfer function degradation of the camera image. In our experiments, we vary camera aperture and resolution as well as distance and angle of the windshield to the camera. To validate our results, we reconstruct the angle variation experiment in the lab.
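The core ray-transport step can be illustrated with a minimal sketch. The snippet below is an illustrative assumption, not the paper's framework: it approximates the windshield as a flat glass plate at windshield-like incidence and refracts a ray through it using the vector form of Snell's law, whereas the paper intersects rays with fitted ellipsoid surfaces.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n opposing
    the ray; eta = n1/n2 (vector form of Snell's law).
    Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Ray through a flat glass plate (index n_g), hitting at 60 degrees
# incidence as is typical for a raked windshield. For a plane-parallel
# plate both interfaces share the same normal oriented against the ray.
n_g = 1.52  # assumed soda-lime glass
theta = np.radians(60.0)
d_in = np.array([np.sin(theta), 0.0, -np.cos(theta)])
normal = np.array([0.0, 0.0, 1.0])

d_glass = refract(d_in, normal, 1.0 / n_g)  # air -> glass
d_out = refract(d_glass, normal, n_g)       # glass -> air
```

For an ideal plane-parallel plate the exit ray is parallel to the entry ray and only laterally displaced; the curvature and wedge of a real windshield break this, which is what the fitted ellipsoid model captures.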
The Sunrise observatory consists of a one-meter solar telescope operated in the gondola of a stratospheric balloon. The first two science flights of Sunrise demonstrated previously unattained imaging quality at lower cost than satellite-based missions, but also revealed a general problem of balloon missions: micro-vibrations occurred during parts of the observation time and made the determination of the point spread function difficult. This paper introduces an adaptation of deconvolution from wave-front sensing (DWFS) as a possible solution. The case of vibrations in the common path is verified in simulations. High-cadence spectro-polarimeters are then considered in order to extend DWFS to non-common-path errors at the scientific camera.
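The deconvolution half of DWFS can be sketched with a standard frequency-domain Wiener filter. This is a minimal sketch under the assumption that the wave-front sensor has already delivered a PSF estimate; the function name, synthetic frame, and Gaussian "vibration" PSF are all illustrative, not taken from the paper.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener deconvolution: restore an image given an estimated PSF.
    nsr is the assumed noise-to-signal power ratio (regularization)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Toy demo: blur a synthetic frame with a small Gaussian PSF standing in
# for vibration smear, then restore it with the known PSF.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
```

The restored frame is closer to the original than the blurred one; in the vibration scenario the difficulty lies in estimating the PSF in the first place, which is what the wave-front sensing data provides.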
While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated on readily processed image data, e.g. the well-known Kodak data set, usually under an additive white Gaussian noise (AWGN) model. Such test data does not correspond to today's real-world image data taken with a digital camera. Using unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps of the color processing chain. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods, and improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
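The baseline operation being improved, wavelet hard thresholding, can be sketched with a single-level Haar transform. This sketch assumes a single constant noise estimate sigma; the paper's point is precisely that after the color processing chain sigma is not constant and must be derived per processing step (and tabulated in look-up tables).

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform (LL, LH, HL, HH subbands)."""
    a = (x[0::2] + x[1::2]) / 2.0   # row-pair averages
    d = (x[0::2] - x[1::2]) / 2.0   # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def denoise_hard(noisy, sigma, k=3.0):
    """Hard-threshold the detail subbands at k * (subband noise std).
    Each Haar detail coefficient averages 4 pixels, so AWGN of std
    sigma becomes std sigma/2 in the subbands."""
    LL, LH, HL, HH = haar2d(noisy)
    t = k * sigma / 2.0
    thr = lambda c: np.where(np.abs(c) > t, c, 0.0)
    return ihaar2d(LL, thr(LH), thr(HL), thr(HH))

# Toy demo on a smooth synthetic image with AWGN of known sigma.
rng = np.random.default_rng(1)
xx, yy = np.mgrid[0:64, 0:64] / 64.0
clean = np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = denoise_hard(noisy, sigma=0.1)
```

With signal-dependent noise from a real camera pipeline, the fixed threshold t would be replaced by a value looked up per coefficient, which is what keeps the proposed method cheap at run time.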
A new quasilinear algorithm for solving the cross-coupling power optimal wire spacing problem is developed. In contrast to state-of-the-art solutions, the proposed method not only guarantees optimality of the solution, but also achieves runtime improvements of more than five orders of magnitude. In addition, the algorithm is modified to river-route the wire endings to their initial positions, allowing it to optimize the wire topology of entire detail-routed standard cell circuits. Extensive replicable experiments assess the effectiveness of the methods for a wide range of real-world circuit examples, in which the wire switching power is reduced locally by up to 50% and chip-wide by up to 8.3%.
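The shape of the underlying optimization can be illustrated with a toy model; this is an assumption for illustration, not the paper's quasilinear algorithm. Take the coupling capacitance between adjacent wires as inversely proportional to their spacing and minimize switching-activity-weighted coupling power under a fixed channel width and a minimum spacing rule. The problem is then convex, and the KKT conditions make each free spacing proportional to the square root of its activity weight.

```python
import numpy as np

def optimal_spacing(alpha, total, s_min):
    """Minimize sum(alpha_i / s_i) s.t. sum(s_i) = total, s_i >= s_min.
    alpha_i models the switching activity weighting the coupling power
    of gap i. Free spacings are proportional to sqrt(alpha_i); gaps
    pushed below s_min are pinned there and the rest is re-solved."""
    alpha = np.asarray(alpha, float)
    s = np.full(len(alpha), float(s_min))
    free = np.ones(len(alpha), bool)
    for _ in range(len(alpha)):
        budget = total - s_min * (~free).sum()
        w = np.sqrt(alpha[free])
        cand = budget * w / w.sum()
        if (cand >= s_min).all():
            s[free] = cand
            return s
        # pin violating gaps at the minimum spacing and iterate
        free[np.where(free)[0][cand < s_min]] = False
    return s

# Demo: activities 1:4:9 give spacings 1:2:3 within a channel of width 12.
spacing = optimal_spacing([1.0, 4.0, 9.0], total=12.0, s_min=1.0)
```

The real problem additionally couples the gaps along whole detail-routed nets and handles the river-routing of wire endings, which is where the quasilinear algorithmic contribution lies.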
KEYWORDS: Video, Video processing, Video coding, Very large scale integration, Computer architecture, Clocks, Multimedia, Quantization, Computer simulations, Motion estimation
A VLSI architecture with flexible, application-specific coprocessors for object-based video encoding/decoding is presented. The architecture consists of a standard embedded RISC core as well as coprocessor modules for macroblock algorithms, motion estimation and bitstream processing. Bitstream decoding involves strong data dependencies, which require optimized logical partitioning; an optimized instruction set can speed up bitstream decoding by a factor of two. This architecture combines the high performance of dedicated ASIC architectures with the flexibility of programmable processors. Dataflow and memory access were optimized based on extensive studies of statistical complexity variations. Results on gate count and clock rate required for real-time processing of MPEG-4 Core Profile video are presented, as well as a comparison with software implementations on a standard RISC architecture.
This paper discusses VLSI architectural support for motion estimation (ME) algorithms within the H.263 and MPEG-4 video coding standards under low-power constraints. High memory access bandwidth and a high number of memory modules are mainly responsible for the high power consumption of various motion estimation architectures. The aim of the presented VLSI architecture was therefore to achieve high efficiency with low memory bandwidth requirements for the computationally demanding algorithms, as well as to support several motion estimation algorithmic features with little additional area overhead. Besides full-search ME with [-16, +15] and [-8, +7] pel search areas, the presented VLSI architecture supports MPEG-4 ME for arbitrarily shaped objects, advanced prediction mode, 2:1 pel subsampling, 4:1 pel subsampling, 4:1 alternate pel subsampling, Three Step Search (TSS), preference of the zero-MV, R/D-optimized ME and half-pel ME. A special dataflow design within the proposed architecture allows up to 16 absolute difference calculations to be performed in parallel, while loading only up to 2 bytes per clock cycle each from current block and search area memory. The VLSI architecture was implemented using a VHDL synthesis approach and resulted in a size of 22.8 kgates (without RAM) at 100 MHz (min.) using a 0.25 micrometer commercial CMOS library.
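The computational kernel that such architectures accelerate is full-search block matching on the sum of absolute differences (SAD). The following is a software sketch of that kernel for one macroblock (the hardware performs up to 16 absolute differences per cycle; this reference version is sequential):

```python
import numpy as np

def full_search_me(cur, ref, bx, by, bsize=16, srange=16):
    """Exhaustive-search motion estimation for one block: minimize the
    SAD over a [-srange, srange-1] pel window, as in the [-16, +15]
    search area of the paper. Returns (dx, dy, best_sad)."""
    blk = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best = (0, 0, np.inf)
    for mvy in range(-srange, srange):
        for mvx in range(-srange, srange):
            y, x = by + mvy, bx + mvx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = int(np.abs(blk - cand).sum())
            if sad < best[2]:
                best = (mvx, mvy, sad)
    return best

# Demo: a frame shifted by (down 3, left 2) should yield MV (dx, dy) = (2, -3).
rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (3, -2), axis=(0, 1))
dx, dy, sad = full_search_me(cur, ref, bx=24, by=24)
```

Features such as 2:1 or 4:1 pel subsampling reduce the inner SAD sum, and TSS replaces the exhaustive double loop with a shrinking set of candidate positions.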
KEYWORDS: Motion estimation, Video, Video coding, Motion analysis, Chlorine, Very large scale integration, Video compression, Computer programming, Visualization, Statistical analysis
A complexity and visual quality analysis of several fast motion estimation (ME) algorithms for the emerging MPEG-4 standard was performed as a basis for HW/SW partitioning for VLSI implementation of a portable multimedia terminal. While the computational complexity of the ME of previously standardized video coding schemes was predictable over time, the support of arbitrarily shaped visual objects (VOs), the various coding options within MPEG-4, as well as content-dependent complexity (caused e.g. by summation truncation for the SAD) now introduce content-dependent (and therefore time-dependent) computational requirements, which cannot be determined analytically. Therefore a new time-dependent complexity analysis method, based on statistical analysis of the memory access bandwidth and the arithmetic and control instruction counts utilized by a real processor, was developed and applied. Fast ME algorithms can be classified into search area subsampling, pel decimation, feature matching, adaptive hierarchical ME and simplified distance criteria. Several specific implementations of algorithms belonging to these classes are compared in terms of complexity and PSNR to ME algorithms for arbitrarily and rectangularly shaped VOs. It is shown that the average macroblock (MB) computational complexity per arbitrarily shaped P-VOP (video object plane) exhibits significant variation over time for the different motion estimation algorithms. These results indicate that theoretical estimations and the number of MBs per VOP are of limited use as approximations of computational complexity over time, which is required e.g. for average system load specification (in contrast to worst-case specification), for real-time processor task scheduling, and for Quality of Service guarantees for several VOs.
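Summation truncation, one source of the content-dependent complexity discussed above, can be shown in a few lines: the SAD accumulation is aborted as soon as the partial sum exceeds the best SAD found so far, so the work per candidate depends on the image content. This is a sketch of the general technique, not of the paper's measurement setup.

```python
import numpy as np

def sad_truncated(blk, cand, best_so_far):
    """SAD with summation truncation: accumulate row by row and abort
    once the partial sum can no longer beat best_so_far. Returns the
    (possibly partial) SAD and the number of rows actually processed,
    which is the content-dependent part of the cost."""
    sad, rows = 0, 0
    for r in range(blk.shape[0]):
        sad += int(np.abs(blk[r].astype(np.int32)
                          - cand[r].astype(np.int32)).sum())
        rows += 1
        if sad >= best_so_far:
            break
    return sad, rows

# Deterministic demo: a perfect match needs all 16 rows, while a very
# poor candidate is rejected after a single row.
blk = np.arange(256, dtype=np.uint8).reshape(16, 16)
sad_full, rows_full = sad_truncated(blk, blk, best_so_far=10**9)
sad_cut, rows_cut = sad_truncated(blk, 255 - blk, best_so_far=1000)
```

Because the abort point varies per macroblock and per frame, instruction and memory access counts must be measured statistically on real sequences, as done in the paper, rather than derived analytically.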
A complexity analysis of the video part of the emerging ISO/IEC MPEG-4 standard was performed as a basis for HW/SW partitioning for VLSI implementation of a portable MPEG-4 terminal. While the computational complexity of previously standardized video coding schemes was predictable for I-, P- and B-frames over time, the support of arbitrarily shaped visual objects as well as the various coding options within MPEG-4 now introduce content-dependent computational requirements with significant variance. In this paper the result of a time-dependent complexity analysis of the encoding and decoding process of a binary-shape-coded video object (VO), and the comparison with a rectangular-shaped VO, is given for the complete codec as well as for the individual tools of the encoding and decoding process. It is shown that the average MB complexity per arbitrarily shaped P-VOP exhibits significant variation over time for the encoder and minor variation for the decoder.
KEYWORDS: Very large scale integration, Motion estimation, Distortion, Clocks, Video, Video compression, Standards development, Video coding, CMOS technology, Algorithm development
This paper describes the architecture and application of a flexible 100 GOPS (giga operations per second) exhaustive-search segment matching VLSI architecture that supports evolving motion estimation algorithms as well as the block matching algorithms of established video coding standards. The architecture is based on a 32 x 32 processor element (PE) array and a 10240-byte on-chip search area RAM, and allows concurrent calculation of motion vectors for 32 x 32, 16 x 16, 8 x 8 and 4 x 4 blocks and partial quadtrees (called segments) for a +/- 32 pel search range with 100% PE utilization. The architecture supports object-based algorithms by excluding pixels outside of video objects from the segment matching process, as well as advanced algorithms such as variable block-size segment matching with luminance correction. The VLSI has been designed using VHDL synthesis and a 0.35 micrometer CMOS technology and will have a clock rate of 100 MHz (min.), allowing the processing of 23668 32 x 32 blocks per second with a maximum of +/- 32 pel search area.
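The concurrent multi-block-size matching rests on the fact that, for a fixed displacement, the SAD of a larger block is the sum of the SADs of its sub-blocks. The sketch below mimics this reuse in software by aggregating 4 x 4 partial SADs; it illustrates the arithmetic identity, not the PE array's dataflow.

```python
import numpy as np

def multi_size_sad(cur, cand):
    """For one candidate displacement, derive the SADs of all 4x4, 8x8
    and 16x16 sub-blocks and of the full 32x32 block by summing 4x4
    partial SADs instead of recomputing each block size from scratch."""
    diff = np.abs(cur.astype(np.int32) - cand.astype(np.int32))
    sad4 = diff.reshape(8, 4, 8, 4).sum(axis=(1, 3))   # 8x8 grid of 4x4 SADs
    sad8 = sad4.reshape(4, 2, 4, 2).sum(axis=(1, 3))   # 4x4 grid of 8x8 SADs
    sad16 = sad8.reshape(2, 2, 2, 2).sum(axis=(1, 3))  # 2x2 grid of 16x16 SADs
    sad32 = sad16.sum()
    return sad4, sad8, sad16, sad32

# Demo on two random 32x32 blocks.
rng = np.random.default_rng(4)
cur = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cand = rng.integers(0, 256, (32, 32), dtype=np.uint8)
sad4, sad8, sad16, sad32 = multi_size_sad(cur, cand)
```

Object-based matching then amounts to zeroing the contribution of pixels outside the video object in `diff` before the aggregation, which is how the architecture excludes out-of-object pixels without extra passes.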