Dr. Besma Abidi has over 30 years of experience in the research, development, architecting, and prototyping of end-to-end solutions to real-time EO/IR/SAR sensing and processing problems. Dr. Abidi has honed her expertise in computer vision, machine learning (ML), artificial intelligence (AI), and edge computing. Her design skills encompass sensor-in-the-loop real-time 2D and 3D computer vision, machine learning for object detection and tracking, image segmentation, sensor fusion, biometrics, MLOps pipelines, low-code/no-code ML, and large-scale data and multi-spectral processing. On the hardware side, she specializes in embedded designs on FPGA and GPU, architecting hybrid systems (CPU/GPU/FPGA) and creating solutions that meet extremely low SWaP (size, weight, and power) requirements for real-time edge processing. Her recent projects have delivered cloud and edge solutions for full-motion video processing, including video enhancement, atmospheric distortion correction, object detection and tracking, and AI-driven video compression and streaming in low-bandwidth environments. These solutions, designed for multiple spectral bands and domains (ground, sea, air, and space), incorporate advanced features such as face and license plate recognition, tail number and hull number recognition, and smart video compression.
Dr. Abidi currently provides consulting services, subject matter expertise, and project management in computer vision, AI/ML, perception, sensor fusion, robotics, autonomy, and edge AI, including solution architecting, team leadership, and the drafting of technical white papers and proposals.
Publications (14)
KEYWORDS: Video, Cameras, Video surveillance, Video processing, High dynamic range imaging, Sensors, Surveillance, Visibility, Data acquisition, Night vision
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions in which bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects produce high dynamic range scenes whose illuminance can vary from a few lux to six-figure values. With conventional monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, creating saturated and/or dark noisy areas and losing information there. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because not all information is present in the original data; active intervention in the acquisition process is required. A software package capable of integrating multiple types of COTS and custom cameras, ranging from unmanned aerial system (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene in which information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night-vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive, easy-to-use graphical interfaces.
The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as of unmanned systems performing automatic data exploitation, such as target detection and identification.
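The exposure-bracketing-and-fusion idea described above can be sketched in a few lines. The weighting scheme below (a Gaussian "well-exposedness" weight, in the style of classical exposure fusion) is an illustrative stand-in, not the system's actual fusion mechanism; all names and parameters are assumptions.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Weight pixels by closeness to mid-gray (0.5), with Gaussian falloff."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(frames):
    """Per-pixel weighted average of differently exposed frames in [0, 1]."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    weights = [well_exposedness(f) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12  # avoid division by zero
    fused = np.zeros_like(frames[0])
    for f, w in zip(frames, weights):
        fused += f * (w / total)
    return fused

# Synthetic high dynamic range scene: a left-to-right illumination ramp.
scene = np.linspace(0.0, 1.0, 8)[None, :].repeat(4, axis=0)
under = np.clip(scene * 0.4, 0.0, 1.0)  # short exposure keeps highlights
over = np.clip(scene * 2.5, 0.0, 1.0)   # long exposure reveals shadows
fused = fuse_exposures([under, over])
```

Each pixel of the fused result is drawn mostly from whichever exposure rendered it closest to mid-gray, which is how detail in both shadow and highlight regions survives.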
Most existing face recognition algorithms require face images with a minimum resolution. Meanwhile, the rapidly emerging need for near-ground, long-range surveillance calls for a migration in face recognition from close-up distances to long distances, and accordingly from low, constant resolution to high, adjustable resolution. With optical zoom capability limited by the system hardware configuration, super-resolution (SR) provides a promising solution with no additional hardware requirements. In this paper, a brief review of existing SR algorithms is conducted and their capability to improve face recognition rates (FRR) for long-range face images is studied. Algorithms applicable to real-time scenarios are implemented and their performance in terms of FRR is examined using the IRISLRHM face database [1]. Our experimental results show that SR followed by appropriate enhancement, such as wavelet-based processing, achieves FRR comparable to that obtained with equivalent optical zoom.
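As an illustration of the multi-frame SR principle the paper surveys, the following sketch performs classical shift-and-add reconstruction under idealized assumptions (known integer subpixel shifts, no noise, no blur). It is a generic textbook construction, not any specific algorithm from the review.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor):
    """Multi-frame SR: place each low-res frame onto a high-res grid at its
    known subpixel shift (expressed in high-res pixels), then average."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        acc[dy::factor, dx::factor] += f
        cnt[dy::factor, dx::factor] += 1
    cnt[cnt == 0] = 1  # avoid division by zero where no frame contributed
    return acc / cnt

# With all four half-pixel shifts present, a 2x grid is recovered exactly.
hr = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [hr[dy::2, dx::2] for dy, dx in shifts]
sr = shift_and_add_sr(frames, shifts, factor=2)
```

In practice the subpixel shifts must be estimated by registration and the result deblurred; this sketch isolates only the grid-filling step.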
Digital imaging systems with extreme zoom capabilities are traditionally found in astronomy and wildlife monitoring. More recently, the need for such capabilities has extended to long-range surveillance and wide-area monitoring of forest fires, airport perimeters, harbors, and waterways. Auto-focusing is an indispensable function for imaging systems designed for such applications. This paper studies the feasibility of image-based passive auto-focusing control for high-magnification systems built from off-the-shelf telescopes and digital cameras/camcorders, concentrating on two associated elements: the cost function (usually an image sharpness measure) and the search strategy. An extensive review of existing sharpness measures and search algorithms is conducted and their performances compared. In addition, their applicability and adaptability to a wide range of high magnifications (50× to 1500×) are addressed. This study lays the foundation for the development of auto-focusing schemes with particular application to high-magnification systems.
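The two elements studied, a sharpness cost function and a search strategy, can be illustrated with a variance-of-Laplacian measure and a simple hill-climbing search over synthetic defocus. Both are common textbook choices, not necessarily those evaluated in the paper, and the blur model below is a crude stand-in for real defocus.

```python
import numpy as np

def laplacian_sharpness(img):
    """Variance of a discrete Laplacian: a common passive focus measure."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def box_blur(img, k):
    """Crude defocus model: k rounds of 4-neighbor averaging."""
    out = img.astype(float).copy()
    for _ in range(k):
        out[1:-1, 1:-1] = (out[:-2, 1:-1] + out[2:, 1:-1]
                           + out[1:-1, :-2] + out[1:-1, 2:]) / 4
    return out

def hill_climb_focus(capture, n_positions):
    """Advance the (simulated) focus motor until sharpness starts to drop."""
    best_pos, best_score = 0, capture(0)
    for pos in range(1, n_positions):
        score = capture(pos)
        if score < best_score:
            break  # passed the sharpness peak
        best_pos, best_score = pos, score
    return best_pos

# Synthetic lens: true focus at position 3; defocus grows with |pos - 3|.
scene = np.zeros((16, 16))
scene[:, 8:] = 1.0  # a vertical step edge
capture = lambda pos: laplacian_sharpness(box_blur(scene, abs(pos - 3)))
found = hill_climb_focus(capture, 7)
```

At extreme magnifications the sharpness curve becomes narrow and noisy, which is precisely why the choice of measure and search strategy matters.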
A surveillance system that detects and tracks security breaches in airports is presented. The system consists of two subsystems, one overhead static camera and one pan/tilt/zoom (PTZ) camera, which first acquire and then follow an intruder who illegally walks into a crowded secure area of an airport. The overhead camera detects the intruder using motion-based segmentation and an optical flow algorithm. Intruder handover from the overhead camera to the PTZ camera is then performed. A novel approach to intruder handover and color-based feature extraction is presented for continuous tracking with the PTZ camera when the intruder moves out of the overhead camera's view. We also use a mean shift filter with a newly designed non-rectangular search window, automatically updated to accurately localize the target. Real experimental results from a local airport are given and discussed.
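A minimal sketch of the mean shift iteration underlying such a tracker, using a plain square window and a synthetic back-projection map rather than the paper's non-rectangular adaptive window; all names and values here are illustrative.

```python
import numpy as np

def mean_shift(weights, start, win, n_iter=20):
    """Move a window center to the weighted centroid of `weights` (e.g. a
    color-histogram back-projection) until it stops shifting."""
    cy, cx = start
    h, w = weights.shape
    for _ in range(n_iter):
        y0, y1 = max(0, cy - win), min(h, cy + win + 1)
        x0, x1 = max(0, cx - win), min(w, cx + win + 1)
        patch = weights[y0:y1, x0:x1]
        total = patch.sum()
        if total == 0:
            break  # no target evidence inside the window
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / total))
        nx = int(round((xs * patch).sum() / total))
        if (ny, nx) == (cy, cx):
            break  # converged
        cy, cx = ny, nx
    return cy, cx

# Back-projection map with a bright blob at (30, 40); start the window nearby.
weights = np.zeros((64, 64))
weights[28:33, 38:43] = 1.0
center = mean_shift(weights, start=(24, 34), win=8)
```

The non-rectangular window of the paper replaces the square slice above with a mask shaped to the target, which reduces background contamination of the centroid.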
An energy-minimizing snake algorithm that runs over a grid is designed and used to reconstruct high-resolution 3D human faces from pairs of stereo images. The accuracy of 3D data reconstructed from stereo depends highly on how well stereo correspondences are established during the feature matching step. Establishing stereo correspondences on human faces is often ill-posed and hard to achieve because of uniform texture, slow changes in depth, occlusion, and lack of gradient. We designed an energy-minimizing algorithm that accurately finds correspondences on face images despite these characteristics. The algorithm establishes stereo correspondences unambiguously by applying a coarse-to-fine energy-minimizing snake in grid format and yields a high-resolution reconstruction at nearly every point of the image. Initially, the grid is stabilized using matches at a few selected high-confidence edge points. The grid then gradually and consistently spreads over the low-gradient regions of the image to reveal the accurate depths of object points. The grid applies its internal energy to approximate mismatches in occluded and noisy regions and to maintain smoothness of the reconstructed surfaces. With every increment in reconstruction resolution, less time is required to establish correspondences. The snake uses the curvature of the grid and the gradient of image regions to automatically select its energy parameters and to approximate unmatched points using matched points from previous iterations, which also accelerates the overall matching process. The algorithm has been applied to the reconstruction of 3D human faces, and experimental results demonstrate the effectiveness and accuracy of the reconstruction.
Very few image processing applications have dealt with x-ray luggage scenes in the past. Concealed threats in general, and low-density items in particular, pose a major challenge to airport screeners. A simple enhancement method for data decluttering is introduced. Initially, the method is applied using manually selected thresholds to progressively generate decluttered slices. The algorithm is then automated using a novel metric based on the Radon transform, which determines the optimum number and values of thresholds and generates a single optimum slice for screener interpretation. A comparison of the newly developed metric to other known metrics demonstrates the merits of the new approach. On-site quantitative and qualitative evaluations of the various decluttered images by airport screeners further establish that the single slice from the image hashing algorithm outperforms traditional enhancement techniques, with a noted increase of 58% in low-density threat detection rates.
This paper proposes a real-time digital auto-focusing algorithm using an a priori estimated set of point spread functions (PSFs). The set of PSFs is estimated a priori by establishing the relation between the two-dimensional PSF and a one-dimensional step response whose elements are samples of the profile of a degraded step edge. From this a priori estimated set, the proposed auto-focusing algorithm selects the optimal PSF using a focusing criterion based on frequency-domain analysis. We then use a constrained least squares (CLS) filter to obtain the in-focus image with the estimated optimal PSF. The proposed algorithm can be implemented in real time because the set of PSFs is estimated in advance and the filtering is performed in the frequency domain.
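The CLS filtering step admits a compact frequency-domain sketch. The Laplacian regularizer and the parameter values below are conventional textbook choices, not those of the paper, and the box PSF is a stand-in for an estimated defocus PSF.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=1e-3):
    """Constrained least squares deblurring in the frequency domain:
    F_hat = conj(H) * G / (|H|^2 + gamma * |C|^2), C = Laplacian."""
    h, w = blurred.shape
    G = np.fft.fft2(blurred)
    H = np.fft.fft2(psf, s=(h, w))  # zero-padded PSF spectrum
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    C = np.fft.fft2(lap, s=(h, w))  # smoothness-constraint spectrum
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(F))

# Blur a random scene with a known 3x3 box PSF (circularly), then restore.
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
psf = np.full((3, 3), 1.0 / 9.0)
H = np.fft.fft2(psf, s=scene.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
restored = cls_restore(blurred, psf)
```

The Laplacian term suppresses amplification at frequencies where the PSF spectrum is near zero, which is what distinguishes CLS from naive inverse filtering.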
This paper proposes a fully digital auto-focusing algorithm for restoring images containing differently defocused objects, which can restore the background as well as all objects. We assume that the out-of-focus blur is isotropic, such as a circle of confusion (COC) or a two-dimensional Gaussian blur. The proposed algorithm can therefore segment objects and estimate each point spread function (PSF) using the ramp width in the one-dimensional step response. The algorithm combines object-based image segmentation and restoration. Experimental results show that the proposed object-based restoration algorithm efficiently removes spatially variant out-of-focus blur from images with multiple blurred objects.
Automatic tracking is essential for 24-hour intruder detection and, more generally, for surveillance systems. This paper presents an adaptive background generation technique and the corresponding moving-region detection for a pan-tilt-zoom (PTZ) camera using a geometric transform-based mosaicing method. A complete system including adaptive background generation, moving-region extraction, and tracking is evaluated using realistic experimental results. More specifically, the results include generated background images, extracted moving regions, and input video with bounding boxes around moving objects. The experiments show that the proposed system can monitor moving targets in wide open areas by automatic panning and tilting in real time.
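The adaptive background idea can be sketched for a static view as a running average with simple differencing; the paper's mosaicing extends this to a panning and tilting PTZ camera by registering frames into a common panorama first. All parameters below are illustrative.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1 - alpha) * bg + alpha * frame

def moving_regions(bg, frame, thresh=0.2):
    """Foreground mask: pixels that differ from the background by > thresh."""
    return np.abs(frame - bg) > thresh

# A bright 4x4 square slides one pixel per frame across a static scene.
frames = []
for t in range(20):
    f = np.zeros((32, 32))
    f[10:14, t:t + 4] = 1.0
    frames.append(f)

bg = frames[0].copy()
for f in frames[1:]:
    mask = moving_regions(bg, f)   # detect before absorbing the frame
    bg = update_background(bg, f)
```

The learning rate `alpha` trades off absorbing slow scene changes against leaving "ghosts" at recently vacated locations, one of the practical issues adaptive background generation must manage.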
Very few image processing applications have dealt with x-ray luggage scenes in the past. In this paper, a series of common image enhancement techniques is first applied to x-ray data, and the results are shown and compared. A novel, simple enhancement method for data de-cluttering, called image hashing, is then described. Initially, this method was applied using manually selected thresholds, with progressively de-cluttered slices generated and displayed for screeners. Further automation of the hashing algorithm (multi-thresholding) for the selection of a single optimum slice for screener interpretation was then implemented. Most existing approaches for automatic multi-thresholding, data clustering, and cluster validity measures require prior knowledge of the number of thresholds or clusters, which is unknown in the case of luggage scenes, given the variety and unpredictability of scene content. A novel metric based on the Radon transform was therefore developed; it finds the optimum number and values of thresholds for any multi-thresholding or unsupervised clustering algorithm. A comparison between the newly developed metric and other known metrics for image clustering is performed. Clustering results from various methods demonstrate the advantages of the new approach.
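The threshold-based slicing at the heart of the de-cluttering can be sketched generically as follows. This illustrates only the slicing given a set of thresholds; it does not implement the paper's Radon-based metric for choosing their number and values, and the scene values are invented.

```python
import numpy as np

def declutter_slices(image, thresholds):
    """Split a grayscale x-ray image into density 'slices' between
    consecutive thresholds; low-density bins isolate faint items."""
    edges = [image.min()] + sorted(thresholds) + [image.max() + 1e-9]
    slices = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        slices.append(np.where((image >= lo) & (image < hi), image, 0.0))
    return slices

# Toy scene: dense clutter (0.9) surrounding a faint low-density item (0.3).
scene = np.full((8, 8), 0.9)
scene[2:5, 2:5] = 0.3
slices = declutter_slices(scene, thresholds=[0.5])
```

The low-density slice shows the faint item alone, stripped of the overlapping dense clutter, which is why screeners detect such items more reliably in a well-chosen slice.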
The commercial face recognition software FaceIt Identification and Surveillance was evaluated using the Facial Recognition Technology (FERET) database. The experimental results show the performance of FaceIt with variations in illumination, expression, age, head size, pose, and the size of the database which all remain difficult problems in face recognition technology.
This paper describes an algorithm for the automatic segmentation and representation of surface structures and non-uniformities in an industrial setting. The automatic image processing and analysis algorithm is developed as part of a complete on-line web characterization system of a paper making process at the wet end. The goal is to: (1) link certain types of structures on the surface of the web to known machine parameter values, and (2) find the connection between detected structures at the beginning of the line and defects seen on the final product. Images of the pulp mixture, carried by a fast moving table, are obtained using a stroboscopic light and a CCD camera. This characterization algorithm succeeded where conventional contrast and edge detection techniques failed due to a poorly controlled environment. The images obtained have poor contrast and contain noise caused by a variety of sources.
The paper industry has long had a need to better understand and control its papermaking process upstream, specifically at the wet end in the forming section of a paper machine. A vision-based system is under development that addresses this need by automatically measuring and interpreting the pertinent paper web parameters at the wet end in real time. The wet-end characterization of the paper web by a vision system involves a 4D measurement of the slurry in real time. These measurements include the 2D spatial information, the intensity profile, and the depth profile. This paper describes the real-time depth profile measurement system for the high-speed moving slurry. A laser line-based measurement method is used with a high-speed programmable camera to directly measure slurry height. The camera is programmed with a profile algorithm, producing depth data at fast sampling rates. Analysis and experimentation have been conducted to optimize the system for the characteristics of the slurry and laser line image. On-line experimental results are presented.
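The per-column laser-line measurement can be sketched as follows. The linear pixel-to-height scale stands in for the calibrated triangulation geometry (true height also depends on the laser and camera angles, e.g. height = displacement / tan θ), and all values are synthetic.

```python
import numpy as np

def line_profile_height(frame, baseline_row, units_per_pixel):
    """Per-column laser-line peak position converted to height via the
    line's displacement from its reference (zero-height) row."""
    peak_rows = np.argmax(frame, axis=0)  # brightest row in each column
    return (baseline_row - peak_rows) * units_per_pixel

# Synthetic frame: the laser line shifts upward where the slurry is higher.
frame = np.zeros((20, 6))
peaks = [15, 15, 12, 12, 15, 15]  # laser-line row per column
for col, row in enumerate(peaks):
    frame[row, col] = 1.0
heights = line_profile_height(frame, baseline_row=15, units_per_pixel=0.5)
```

Running this peak extraction on-camera, as the abstract describes, keeps only one depth value per column and is what makes fast sampling rates achievable.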
Active sensing is the process of exploring the environment using multiple views of a scene captured by sensors from different points in space under different sensor settings. Applications of active sensing are numerous and can be found in the medical field (limb reconstruction), in archeology (bone mapping), in the movie and advertisement industry (computer simulation and graphics), in manufacturing (quality control), as well as in the environmental industry (mapping of nuclear dump sites). In this work, the focus is on the use of a single vision sensor (camera) to perform the volumetric modeling of an unknown object in an entirely autonomous fashion. The camera moves to acquire the necessary information in two ways: (a) viewing closely each local feature of interest using 2D data; and (b) acquiring global information about the environment via 3D sensor locations and orientations. A single object is presented to the camera and an initial arbitrary image is acquired. A 2D optimization process is developed. It brings the object in the field of view of the camera, normalizes it by centering the data in the image plane, aligns the principal axis with one of the camera's axes (arbitrarily chosen), and finally maximizes its resolution for better feature extraction. The enhanced image at each step is projected along the corresponding viewing direction. The new projection is intersected with previously obtained projections for volume reconstruction. During the global exploration of the scene, the current image as well as previous images are used to maximize the information in terms of shape irregularity as well as contrast variations. The scene on the borders of occlusion (contours) is modeled by an entropy-based objective functional. This functional is optimized to determine the best next view, which is recovered by computing the pose of the camera. 
A criterion based on the minimization of the difference between consecutive volume updates is set for termination of the exploration procedure. These steps are integrated into the design of an off-line Autonomous Model Construction System (AMCS) based on data-driven active sensing. The system operates autonomously, with no human intervention and no prior knowledge about the object. The results of this work are illustrated using computer simulation applied to intensity images rendered by a ray-tracing software package.
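The projection-intersection volume update can be illustrated with orthographic silhouette carving on a voxel grid, a simplified stand-in for the system's pose-driven intersection of projections; the axis-aligned views and object below are assumptions for the sketch.

```python
import numpy as np

def carve(voxels, silhouette, axis):
    """Keep only voxels whose orthographic projection along `axis` falls
    inside the binary silhouette (shape-from-silhouette carving)."""
    keep = np.expand_dims(np.asarray(silhouette, dtype=bool), axis=axis)
    return voxels & keep

# Ground-truth object: a 4x4x4 cube inside an 8x8x8 volume.
truth = np.zeros((8, 8, 8), dtype=bool)
truth[2:6, 2:6, 2:6] = True

vol = np.ones((8, 8, 8), dtype=bool)   # start from a full bounding volume
for axis in range(3):
    silhouette = truth.any(axis=axis)  # simulated binary view along this axis
    vol = carve(vol, silhouette, axis)
```

Each new view can only remove voxels, so the volume shrinks monotonically toward the object's visual hull, which is why the difference between consecutive updates is a natural termination criterion.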