With the increasing demand for autonomous vehicles, reliable detection and recognition of objects have become paramount. Millimeter-wave radar systems operating at frequencies such as 94 GHz offer several advantages, including high resolution, precise range measurement, and resilience to adverse weather. However, accurately characterizing the radar cross section (RCS) of objects in this frequency range is essential to optimize the performance of these radar systems. In this work, we present RCS measurements of various objects commonly encountered on the road, such as metal balls and bricks. The measurements illuminate the radar-signal fluctuations exhibited by different objects at 94 GHz. This knowledge serves as a foundation for developing robust algorithms and signal-processing techniques that enhance the object detection, classification, and tracking capabilities of radar-based autonomous driving systems. Moreover, our research provides a valuable dataset of RCS measurements for different objects at 94 GHz, which can serve as a benchmark for future studies in radar system design and testing. Such a dataset facilitates the development and validation of radar system models, the evaluation of sensor-fusion approaches, and the comparison of performance across different radar hardware and signal-processing algorithms. The results contribute to the advancement of radar-based autonomous driving systems by providing crucial insights into the scattering characteristics of objects at millimeter-wave frequencies. The findings have practical implications for optimizing the design, testing, and performance of radar systems, ultimately enhancing the safety and efficiency of autonomous vehicles in real-world scenarios.
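For context on the magnitudes involved, a canonical reference target such as a metal sphere has a closed-form RCS in the optical region, which applies at 94 GHz (wavelength about 3.2 mm) for centimeter-scale spheres. A minimal sketch; the 5 cm radius is an illustrative choice, not a target from the measurements:

```python
import math

def sphere_rcs_optical(radius_m):
    """Optical-region RCS of a perfectly conducting sphere: sigma = pi * r^2.
    Valid when the circumference is large relative to the wavelength
    (2 * pi * r >> lambda), which holds for centimeter-scale spheres at 94 GHz."""
    return math.pi * radius_m ** 2

def rcs_dbsm(sigma_m2):
    """Convert an RCS in square meters to dBsm."""
    return 10.0 * math.log10(sigma_m2)

# Illustrative example: a 5 cm radius metal ball.
sigma = sphere_rcs_optical(0.05)
print(f"sigma = {sigma:.5f} m^2 = {rcs_dbsm(sigma):.1f} dBsm")
# -> sigma = 0.00785 m^2 = -21.0 dBsm
```

Values such as this one set the scale against which the fluctuations of more irregular road objects (bricks, debris) can be compared.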
This work proposes a non-calibrated free-space materials characterization using a vector network analyzer (VNA) and a spot-focusing lens. Compared with traditional quasi-optical measurements, it requires no high-gain antennas. The smaller terahertz-wave spot at the focal point means that smaller samples can be characterized. In the measurements, S21 is transformed to the time domain; the location of the maximum corresponds to the fastest transmission path. Time-domain gating is applied to isolate the source-mismatch error and multi-path effects. Samples are then placed at the focal point, and the frequency-domain sample signals are recorded and processed. Phase ambiguity is resolved by linear fitting and extrapolation. The proposed method can extract the electromagnetic parameters of materials with no limitation on frequency. The results are more precise than those obtained with the gate-reflect-line (GRL) calibration method, which is generally considered the most accurate.
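The gating step can be sketched as follows. The uniform frequency grid, the Gaussian gate shape, and the gate width are illustrative assumptions, not the exact processing of the paper:

```python
import numpy as np

def time_gate_s21(s21, f_hz, gate_width_s=0.5e-9):
    """Transform S21 to the time domain, gate around the strongest (fastest/direct)
    path, and return the gated frequency-domain response."""
    n = len(s21)
    h = np.fft.ifft(s21)                          # impulse-response estimate
    t = np.fft.fftfreq(n, d=f_hz[1] - f_hz[0])    # time axis of the IFFT bins
    t0 = t[np.argmax(np.abs(h))]                  # strongest path location
    gate = np.exp(-((t - t0) / gate_width_s) ** 2)  # Gaussian time gate
    return np.fft.fft(h * gate)

# Synthetic example: a direct path plus a weaker, delayed multi-path echo.
f = np.linspace(75e9, 110e9, 801)
s21 = np.exp(-2j * np.pi * f * 1e-9) + 0.3 * np.exp(-2j * np.pi * f * 3e-9)
gated = time_gate_s21(s21, f)   # frequency ripple from the echo is suppressed
```

The ripple in |S21| caused by the echo largely disappears after gating, which is the effect used here to suppress source mismatch and multi-path contributions before the sample parameters are extracted.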
Complex-valued convolutional neural networks (CVCNNs) outperform real-valued neural networks in terahertz imaging. In this paper, a complex-valued neural network is applied, for the first time, to a terahertz image-classification task in a vector network analyzer (VNA) imaging system, and a CVCNN processing framework for terahertz image classification is proposed. Terahertz image datasets are constructed using the MNIST handwritten-digit dataset and the point spread function (PSF) measured from our transmission system. Compared with a real-valued CNN, the CVCNN achieves higher accuracy, is significantly less vulnerable to over-fitting, and can exploit phase information, which a real-valued CNN cannot. The method of training-data generation and specific implementation details are given. The superiority of the method is verified using simulated and measured data obtained from a 200 GHz imaging system.
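As an illustration of the core operation, a complex-valued convolution reduces to four real-valued ones. A minimal NumPy sketch; the shapes, kernel, and names are illustrative, and an actual CVCNN adds learnable parameters, activations, and pooling:

```python
import numpy as np

def complex_conv2d(img, kern):
    """Valid-mode complex 2-D cross-correlation (the "convolution" of deep
    learning), split into four real ones: (a+ib)(c+id) = (ac-bd) + i(ad+bc)."""
    def rconv(x, k):
        H, W = x.shape
        h, w = k.shape
        out = np.zeros((H - h + 1, W - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
        return out
    a, b = img.real, img.imag
    c, d = kern.real, kern.imag
    return (rconv(a, c) - rconv(b, d)) + 1j * (rconv(a, d) + rconv(b, c))

rng = np.random.default_rng(0)
img = np.exp(1j * 2 * np.pi * rng.random((8, 8)))  # unit amplitude, random phase
kern = np.full((3, 3), 1 / 9, dtype=complex)       # complex averaging kernel
out = complex_conv2d(img, kern)                    # complex result, shape (6, 6)
```

Because amplitude and phase pass through the layer jointly, the phase information of the VNA measurement is preserved, which is the property a real-valued CNN applied to amplitude images loses.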
This paper presents a study of underwater noise in Lake Siutghiol, near Constanța. The study supports signal-to-noise-ratio maximization for better detection of underwater targets. By studying the properties of the underwater noise, we aim to draw conclusions about its stationarity, in the wide or even in the strict sense. It is very useful to examine the stochastic parameters: mean value, standard deviation, and correlation coefficients. The relative invariance of those parameters indicates wide-sense stationarity. If the probability density function of the underwater noise is time-invariant, the noise can be considered strict-sense stationary. The noise characterization is useful for optimizing the detection of underwater targets by SONAR systems, using classic matched filters or time-frequency matched filters.
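The wide-sense check described above can be sketched as follows: split the noise record into segments and compare per-segment statistics. The segment count, the lag-1 correlation choice, and the synthetic Gaussian record are illustrative assumptions:

```python
import numpy as np

def segment_stats(x, n_seg):
    """Per-segment mean, standard deviation, and lag-1 autocorrelation
    coefficient; near-constant values across segments are consistent with
    wide-sense stationarity."""
    stats = []
    for s in np.array_split(x, n_seg):
        mean = s.mean()
        std = s.std(ddof=1)
        r1 = np.corrcoef(s[:-1], s[1:])[0, 1]  # lag-1 correlation coefficient
        stats.append((mean, std, r1))
    return np.array(stats)

# Synthetic stationary noise record: the statistics should barely vary.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 100_000)
stats = segment_stats(noise, 10)   # shape (10, 3): mean, std, r1 per segment
```

For a real lake recording, drifting per-segment values would argue against stationarity and would have to be accounted for in the matched-filter design.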
KEYWORDS: Cameras, Detection and tracking algorithms, Imaging systems, 3D modeling, Optical tracking, Databases, Systems modeling, Performance modeling, Optical engineering, RGB color model
We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point (ICP) algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in the pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation. Our pipeline can be fully parallelized on the GPU and incorporated seamlessly into current real-time depth-camera tracking systems. Second, we compare state-of-the-art weighting algorithms and propose a weight-degradation algorithm based on the measurement characteristics of a consumer depth camera. Third, we use NVIDIA Kepler shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera-tracking system achieves state-of-the-art results in both accuracy and efficiency.
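As one hedged illustration of a weight-degradation rule: axial depth noise for consumer depth cameras is commonly modeled as growing quadratically with distance, so far-away correspondences can be down-weighted accordingly. The functional form and coefficients below are assumptions for illustration, not the paper's model:

```python
import numpy as np

def depth_weight(z_m, a=0.0012, b=0.0019, z0=0.4):
    """Down-weight ICP correspondences by an assumed axial-noise model
    sigma(z) = a + b * (z - z0)^2 (meters); inverse-variance weights,
    normalized so the most reliable point gets weight 1."""
    sigma = a + b * (np.asarray(z_m, dtype=float) - z0) ** 2
    w = 1.0 / sigma ** 2
    return w / w.max()

z = np.array([0.5, 1.0, 2.0, 4.0])   # correspondence depths in meters
w = depth_weight(z)                  # monotonically decaying with depth
```

These per-point weights would multiply the ICP residuals inside the pose solver, so distant, noisy measurements contribute less to the estimated transformation.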
It is challenging to capture a high-dynamic-range (HDR) scene with a low-dynamic-range camera. A weighted-sum-based image fusion (IF) algorithm is proposed to express an HDR scene as a single high-quality image. The method comprises three parts. First, two image features, gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, with the source image as the guidance image. This reduces noise in the initial weight maps and preserves texture consistent with the original images. Finally, the fused image is constructed as a weighted sum of the source images in the spatial domain. The main contributions are the estimation of the initial weight maps and the guided-filter-based refinement, which together provide accurate weight maps for IF. Compared with traditional IF methods, this algorithm avoids image segmentation, combination, and camera-response-curve calibration. Experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
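The two cues and the final weighted sum can be sketched as follows on grayscale inputs. The Gaussian well-exposedness form with sigma = 0.2 is a common choice assumed here, and the guided-filter refinement between the two steps is omitted:

```python
import numpy as np

def initial_weight(img, sigma=0.2, eps=1e-12):
    """img: grayscale float image in [0, 1]. Weight = gradient x well-exposedness."""
    gy, gx = np.gradient(img)
    grad = np.sqrt(gx ** 2 + gy ** 2)                      # contrast cue
    wexp = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))  # well-exposedness cue
    return grad * wexp + eps                               # eps avoids all-zero maps

def fuse(images):
    """Pixel-wise weighted sum of the source images with normalized weight maps."""
    w = np.stack([initial_weight(im) for im in images])
    w /= w.sum(axis=0, keepdims=True)
    return np.sum(w * np.stack(images), axis=0)

ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # synthetic radiance ramp
under = np.clip(ramp * 0.5, 0.0, 1.0)                # under-exposed rendering
over = np.clip(ramp * 1.5, 0.0, 1.0)                 # over-exposed rendering
fused = fuse([under, over])                          # weighted-sum fusion
```

In the full method, a guided filter (with the source image as guidance) would smooth each weight map before normalization, which suppresses weight-map noise while keeping the weights aligned with image edges.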