We present a spatially adaptive defogging algorithm for enhancement of color and visibility of unmanned aerial vehicle
(UAV) images. It is hard to identify an object of interest in airborne images acquired by satellites, airplanes, or
UAVs because of various atmospheric distortions. To overcome this problem, the proposed algorithm decomposes the
input foggy image into the original fog-free component and the atmospherically distorted component, and then estimates
the original image based on the image degradation model. We first generate a normalized image using the maximum
value among RGB color channels of a foggy Image. We estimate the atmospheric light in the labeled image. We also
generate a modified transmission map using the labeled image and a guided filter. A major contribution of the proposed
work is the enhancement of details using a guided filter as well as defogging. We can significantly enhance the visibility
of a foggy image by using the estimated atmospheric light and the transmission map. The proposed algorithm can
remove foggy components better than existing defogging techniques because the specular component and the labeled
image are used.
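The degradation-model inversion underlying this kind of defogging can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it assumes the atmospheric light A and the transmission map t have already been estimated (the paper obtains them from the labeled image and a guided filter), and simply inverts the standard model I = J·t + A·(1 − t):

```python
import numpy as np

def defog(image, atmospheric_light, transmission, t_min=0.1):
    """Invert the standard atmospheric degradation model
    I(x) = J(x) * t(x) + A * (1 - t(x)) to recover the fog-free image J.

    image             : HxWx3 float array in [0, 1], the foggy input I
    atmospheric_light : length-3 array, the estimated A
    transmission      : HxW array in (0, 1], the estimated t
    t_min             : lower clamp on t to avoid amplifying noise
                        where the fog is dense (hypothetical default)
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    restored = (image - atmospheric_light) / t + atmospheric_light
    return np.clip(restored, 0.0, 1.0)
```

Given accurate estimates of A and t, this inversion recovers the scene radiance exactly; in practice, the quality of the result depends almost entirely on how well the transmission map is refined, which is where the guided filter enters.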
In this paper, we present a novel high dynamic range (HDR) imaging method using a single input image. Conventional
multiple image-based HDR methods are successful only when there is no motion in the scene during the
acquisition of multiple, differently exposed low dynamic range (LDR) images. If this constraint is not satisfied, a
ghost artifact is produced in the resulting HDR image. In order to overcome these limitations, we generate multiple,
differently exposed LDR images from a single input image. We call these multiple images a set of layered exposed (LE)
images. In order to generate an appropriate set of LE images, the proposed method divides the input image into nine subregions
and computes the local mean of each subregion to estimate the minimum and maximum local means. The estimated local
means define the ranges for histogram equalization (HE). The HDR image is then generated by fusing the differently exposed LE images.
More specifically, given a set of LE images, we perform weighted fusion to produce the resulting HDR image, which is
inherently free from ghost artifacts since all LE images are geometrically identical. Experimental results show that the
proposed method outperforms existing algorithms in terms of both ghost-artifact removal and contrast enhancement.
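The pipeline of the abstract can be sketched in two steps: generate geometrically identical LE images whose brightness levels span the minimum and maximum subregion means, then fuse them with per-pixel weights favoring well-exposed values. The sketch below is a loose approximation under stated assumptions, not the authors' exact range-limited HE formulation: it uses a gamma mapping to reach each target brightness and a Gaussian mid-gray weighting for fusion:

```python
import numpy as np

def layered_exposures(gray, n_layers=5):
    """Generate a set of layered-exposed (LE) images from one input.

    Divides the image into a 3x3 grid (nine subregions), takes the
    min/max of the nine subregion means, and spreads n_layers target
    brightness levels between them. A gamma curve stands in for the
    paper's range-limited HE step (an assumption of this sketch).
    gray: HxW float array in [0, 1].
    """
    h, w = gray.shape
    means = [gray[i * h // 3:(i + 1) * h // 3,
                  j * w // 3:(j + 1) * w // 3].mean()
             for i in range(3) for j in range(3)]
    lo, hi = min(means), max(means)
    global_mean = gray.mean()
    layers = []
    for target in np.linspace(lo, hi, n_layers):
        # Choose gamma so the global mean maps near the target level.
        gamma = np.log(max(target, 1e-6)) / np.log(max(global_mean, 1e-6))
        layers.append(np.clip(gray, 1e-6, 1.0) ** gamma)
    return layers

def fuse(layers, sigma=0.2):
    """Weighted fusion favoring well-exposed (mid-gray) pixels.
    All layers come from one image and are geometrically identical,
    so the result is inherently free from ghost artifacts."""
    stack = np.stack(layers)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-8
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)
```

Because every LE image is derived from the same pixels, no alignment step is needed; this is the structural reason the method avoids the ghosting that plagues multi-shot HDR.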
Detecting and identifying targets in aerial images has been a challenging problem due to various types of image
distortion factors, such as motion of a sensing device, weather variation, scale changes, and dynamic viewpoint. For
accurate, robust recognition of objects in unmanned aerial vehicle (UAV) videos, we present a novel target detection and
identification algorithm using part-template matching. For target detection, the proposed method partitions the target into
part-templates using an efficient extraction scheme based on target part regions. We also propose target identification
based on a distribution distance measure computed over the target part-templates.
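A minimal sketch of part-template identification with a distribution distance might look as follows. The grid partition and the Bhattacharyya distance are illustrative stand-ins (the abstract does not specify either), chosen to show the overall structure: split the template into parts, compare the intensity distribution of each corresponding part, and aggregate:

```python
import numpy as np

def split_into_parts(patch, rows=2, cols=2):
    """Partition a target patch into a grid of part-templates
    (a simple stand-in for the paper's part-region extraction)."""
    h, w = patch.shape
    return [patch[i * h // rows:(i + 1) * h // rows,
                  j * w // cols:(j + 1) * w // cols]
            for i in range(rows) for j in range(cols)]

def bhattacharyya(p, q):
    """Distance between two normalized histograms; 0 means identical."""
    return float(np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q)))))

def part_distance(candidate, template, bins=16):
    """Average distribution distance over corresponding part-templates;
    lower values indicate a better identity match."""
    dists = []
    for cp, tp in zip(split_into_parts(candidate),
                      split_into_parts(template)):
        hc, _ = np.histogram(cp, bins=bins, range=(0, 1))
        ht, _ = np.histogram(tp, bins=bins, range=(0, 1))
        hc = hc / max(hc.sum(), 1)
        ht = ht / max(ht.sum(), 1)
        dists.append(bhattacharyya(hc, ht))
    return sum(dists) / len(dists)
```

Comparing part-wise distributions rather than whole-template pixels gives some robustness to the partial occlusion and viewpoint changes that the abstract lists among UAV imaging distortions.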