Automated navigation of Unmanned Aircraft Systems (UAS) across a broad range of illumination scenarios requires improved, real-time depth estimation and long-distance obstacle detection. We present a lightweight ultra-wide-angle camera optimized for low-light conditions (down to < 1 lux) mounted on a drone and compare its optical performance with that of other modules found on the market. We also capture images from the drone in flight, test them on monocular depth estimation neural networks, and show that our camera module is suitable for low-light navigation.
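To give a feel for why low-light performance is dominated by the photon budget, the following sketch estimates the shot-noise-limited SNR of a single pixel versus scene illuminance. All parameter values (f-number, pixel pitch, quantum efficiency, exposure, read noise) are illustrative assumptions, not measured specifications of the camera module described above.

```python
import numpy as np

def pixel_snr(lux, fnum=1.6, pixel_pitch_um=3.0,
              qe=0.8, exposure_s=1 / 30, read_noise_e=2.0):
    """Rough photon-budget SNR estimate for a single pixel.

    Uses the common approximation that 1 lux of ~555 nm light carries
    about 4e15 photons / m^2 / s, and that lens throughput scales as
    1 / (4 * N^2) for f-number N. Both are simplifications.
    """
    photons_m2_s = 4e15 * lux                     # approximate photon flux
    pixel_area = (pixel_pitch_um * 1e-6) ** 2     # pixel area in m^2
    throughput = 1.0 / (4.0 * fnum ** 2)          # lens light collection
    signal_e = photons_m2_s * pixel_area * exposure_s * throughput * qe
    noise_e = np.sqrt(signal_e + read_noise_e ** 2)  # shot + read noise
    return signal_e / noise_e

snr_low = pixel_snr(0.5)    # a sub-1-lux night scene
snr_day = pixel_snr(100.0)  # typical indoor illumination
```

Even this crude model makes the design pressure clear: at sub-lux levels the per-frame signal is a few tens of electrons, so fast optics (low f-number) and large pixels buy SNR directly.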
The next generation of sUAS (small Unmanned Aircraft Systems) for automated navigation will have to perform in challenging conditions: bad weather, high and low temperatures, and from dusk to dawn. This paper presents experimental results from a new wide-angle vision camera module specially optimized for low light. We present the optical characteristics of this system as well as experimental results obtained for different sense-and-avoid functionalities. We also show preliminary results obtained by feeding our camera module's images to neural networks for different scene understanding tasks.
As more and more cameras are used for machine perception, the optical design process still relies on key indicators such as the point spread function (PSF) and the modulation transfer function (MTF), based on aberration minimization. This process has proven efficient for human vision but is not tailored to machine perception. Given a specific computer vision task, it is not always necessary to target the same key performance indicators (KPIs) as when images are viewed by humans. Moreover, image quality may change over a camera's lifespan, for example with the appearance of defocus. It is therefore crucial to determine how this kind of degradation affects a computer vision task. In this work we study the impact of defocus on 2D object identification and show that, for a given design, identification is not impacted by image degradation below a certain threshold. We also demonstrate that this threshold is higher for lower f-numbers, which makes them better design candidates.
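The threshold behavior described above can be illustrated with a toy experiment. This sketch is not the authors' pipeline: it approximates defocus with a Gaussian blur of increasing width applied to an ideal step edge and tracks an edge-contrast proxy, showing how a task-relevant metric degrades gradually, so a detector tolerant above some contrast threshold is unaffected by mild defocus.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1D Gaussian kernel, truncated at ~3 sigma and normalized to sum 1.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def edge_sharpness(signal):
    # Maximum absolute gradient of a step edge: 1.0 when perfectly sharp.
    return np.abs(np.diff(signal)).max()

edge = np.concatenate([np.zeros(64), np.ones(64)])  # ideal step edge
sigmas = (0.5, 1.0, 2.0, 4.0)                       # growing defocus
sharpness = [edge_sharpness(np.convolve(edge, gaussian_kernel(s), mode="same"))
             for s in sigmas]
# Sharpness decreases monotonically with defocus; a recognition task that
# only needs sharpness above some floor tolerates the first few steps.
```

In the same spirit, a lower f-number design starts from a larger aberration margin, so the degradation has to grow further before the task metric crosses its threshold.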
The optical design process consists of minimizing aberrations using optimization methods. It relies on key performance indicators (KPIs), such as the point spread function (PSF), the modulation transfer function (MTF), relative illumination (RI), and spot sizes, that depend on the aberrations of the lens elements. Their target values need to be defined, either for human or machine perception, at an early stage of the design, which can be complex for challenging designs such as extended fields of view. We developed an optical and imaging simulation pipeline able to render the effects of complex optical designs and the image sensor on an initial aberration-free image. Extracting files from ray-tracing software for the simulated PSF and sensor target information, the algorithm accurately renders off-axis aberrations using a Zernike polynomial representation combined with noise contributions and relative illumination. The resulting image faithfully represents the performance of an optical system from the optics to the sensor, and we can then study the impact of introducing additional aberrations.
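A minimal sketch of this kind of PSF-based rendering is shown below, reduced to a single Zernike term (defocus, Z4) on a circular pupil, followed by FFT convolution and a simple shot-plus-read noise model. The single-term pupil, the photon level, and the noise parameters are simplifying assumptions for illustration; the pipeline described above ingests full multi-term, field-dependent PSFs from ray-tracing software.

```python
import numpy as np

def defocus_psf(n=64, defocus_waves=0.5):
    """PSF of a circular pupil carrying only Zernike defocus."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    rho2 = x ** 2 + y ** 2
    pupil = (rho2 <= 1.0).astype(float)
    # Zernike defocus term: Z4 = sqrt(3) * (2 * rho^2 - 1), in waves.
    phase = defocus_waves * np.sqrt(3) * (2 * rho2 - 1)
    field = pupil * np.exp(2j * np.pi * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def render(image, psf, photons=1000.0, read_noise_e=2.0, seed=0):
    """Blur an aberration-free image with the PSF, then add sensor noise."""
    rng = np.random.default_rng(seed)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    blurred = np.clip(blurred, 0.0, None)
    noisy = rng.poisson(blurred * photons) / photons      # shot noise
    return noisy + rng.normal(0.0, read_noise_e / photons, image.shape)
```

Relative illumination can be layered on the same output by multiplying the rendered frame with a field-dependent falloff map before the noise step.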