Cleaning a building point cloud is a challenging issue, and it is crucial for a faithful scan-to-BIM 3D model. During scanning, the point cloud is generally affected by several factors: the scanner can produce false data due to reflections on reflective surfaces such as mirrors and windows, and these false points can form large clusters of disturbing data that are not easy to detect. In this work, we use a statistical method, the box plot, to clean the data of false points. This method can be seen as a refined way of reading histograms. We test the proposed method on a private database containing four building point clouds specifically designed for building information modeling (BIM) applications. The experimental results are satisfying, and our method detects most of the false points in the database.
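As an illustration of the box-plot idea, the sketch below filters an N×3 NumPy point array by keeping only points whose coordinates fall within the conventional 1.5×IQR whiskers on each axis. The array layout, the per-axis application, and the whisker factor are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def box_plot_filter(points, k=1.5):
    """Keep points whose coordinates fall inside the box-plot fences.

    points: (N, 3) array of x, y, z coordinates.
    k: whisker factor (1.5 is the conventional box-plot value).
    """
    keep = np.ones(len(points), dtype=bool)
    for axis in range(points.shape[1]):
        q1, q3 = np.percentile(points[:, axis], [25, 75])
        iqr = q3 - q1
        lower, upper = q1 - k * iqr, q3 + k * iqr
        keep &= (points[:, axis] >= lower) & (points[:, axis] <= upper)
    return points[keep]

# Example: a random cloud with a few far-away "false" points
cloud = np.vstack([np.random.randn(1000, 3),
                   np.array([[50.0, 50.0, 50.0], [-40.0, 0.0, 60.0]])])
clean = box_plot_filter(cloud)
print(cloud.shape, "->", clean.shape)
```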
KEYWORDS: Clouds, Filtering (signal processing), 3D modeling, Optical filters, Data modeling, Laser scanners, Signal processing, 3D image processing, Process modeling, Error analysis
Cleaning data is one of the most important tasks in data science and machine learning. It mitigates many problems in datasets, such as time complexity and added noise. In large datasets, outliers are extreme values that deviate from the overall pattern of the sample; usually, they indicate measurement variability or experimental errors. Depending on whether a variable is numeric or categorical, different techniques can be used to study its distribution and detect outliers, such as the histogram, the box plot, and the Z-score. This work aims to develop a model-based method to detect undesirable points in a 3D point cloud representing a building. Our proposed method relies on the Z-score, well known in statistics as the standard score, to filter outliers. The idea behind this concept is to determine whether a data value is above or below the average, and by how far. More specifically, the Z-score indicates how many standard deviations a data point lies from the mean.
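The following is a minimal sketch of Z-score filtering applied per coordinate axis of a point cloud, assuming the points are stored as an N×3 NumPy array and using a cutoff of three standard deviations; the attribute actually scored in the paper and the cutoff it uses may differ.

```python
import numpy as np

def z_score_filter(points, threshold=3.0):
    """Remove points lying more than `threshold` standard deviations
    from the mean along any coordinate axis.

    points: (N, 3) array of x, y, z coordinates.
    """
    mean = points.mean(axis=0)
    std = points.std(axis=0)
    z = np.abs((points - mean) / std)      # per-axis Z-scores
    return points[(z < threshold).all(axis=1)]

cloud = np.vstack([np.random.randn(1000, 3), [[30.0, -25.0, 40.0]]])
print(z_score_filter(cloud).shape)
```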
One of the most challenging tasks in computer vision is to emulate the human cognitive ability to extract the salient object in a scene. We tackle the task of unsupervised salient video object segmentation using boundary connectedness and space-time salient regions. First, a boundary prior measure is used to separate salient regions detected in both space and time. Then, the background-foreground region connectedness is computed and combined with an appearance model via an iterative energy minimization framework to segment the salient moving object. For temporal consistency, the segmentation result of the current frame is used, in addition to the optical flow and the boundary prior, to segment the next frame. The experiments show good performance of our algorithm for salient video object segmentation on benchmark datasets, even in the presence of various challenges.
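One common way to quantify how connected a candidate region is to the image boundary is the ratio of its border-touching pixels to the square root of its area. The sketch below uses that formulation purely for illustration; the exact measure and the energy minimization framework used in the paper are not reproduced here.

```python
import numpy as np

def boundary_connectedness(region_mask):
    """Boundary connectivity of a binary region: number of its pixels on
    the image border divided by the square root of its area (one common
    formulation; the paper's exact measure may differ)."""
    border = np.zeros_like(region_mask, dtype=bool)
    border[0, :] = border[-1, :] = True
    border[:, 0] = border[:, -1] = True
    on_border = np.logical_and(region_mask, border).sum()
    area = region_mask.sum()
    return on_border / np.sqrt(area) if area > 0 else 0.0

# A region hugging the top border scores high; a fully interior region scores 0.
mask = np.zeros((50, 50), dtype=bool)
mask[0:10, 5:25] = True
print(round(boundary_connectedness(mask), 2))
```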
A particular algorithm for moving object detection using a background subtraction approach is proposed. We generate the background model by combining quad-tree decomposition with entropy theory. In general, many background subtraction approaches are sensitive to sudden illumination changes in the scene and cannot update the background image accordingly. The proposed background modeling approach addresses this illumination change problem. After performing background subtraction based on the proposed background model, the moving targets can be accurately detected in each frame of the image sequence. To achieve high accuracy in motion detection, the binary motion mask is computed with the proposed threshold function. The experimental analysis, based on statistical measurements, demonstrates the efficiency of the proposed method both quantitatively and qualitatively, and it even substantially outperforms existing methods in perceptual evaluation.
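To make the background-subtraction pipeline concrete, the sketch below maintains a simple running-average background and thresholds the absolute frame difference into a binary motion mask. The running average and the fixed threshold are stand-ins for the quad-tree/entropy background model and the adaptive threshold function described above, not reproductions of them.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background update (illustrative only; the paper
    builds its model with quad-tree decomposition and entropy theory)."""
    return (1.0 - alpha) * background + alpha * frame

def motion_mask(background, frame, threshold=25.0):
    """Binary motion mask from the absolute frame/background difference.
    The fixed threshold stands in for the paper's threshold function."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return (diff > threshold).astype(np.uint8)

# Toy grayscale sequence: a static scene with a moving bright square
frames = [np.zeros((64, 64), np.float32) for _ in range(5)]
for t, f in enumerate(frames):
    f[10 + t:20 + t, 10:20] = 255.0

background = frames[0].copy()
for frame in frames[1:]:
    mask = motion_mask(background, frame)
    background = update_background(background, frame)
print(mask.sum(), "pixels flagged as moving in the last frame")
```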
This paper presents an algorithm for automatic segmentation of moving objects in video based on spatiotemporal visual saliency and an active contour model. Our algorithm exploits visual saliency and motion information to build a spatiotemporal visual saliency map used to extract a moving region of interest. This region automatically provides the seeds for the convex active contour (CAC) model to segment the moving object accurately. The experiments show good performance of our algorithm for moving object segmentation in video without user interaction, especially on the SegTrack dataset.
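A hedged sketch of the fusion step: a spatial saliency map and an optical-flow magnitude map (both assumed to be precomputed) are combined by a weighted sum into a spatiotemporal map, and a high quantile of that map provides foreground seeds for the convex active contour model. The fusion rule and the quantile are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def spatiotemporal_saliency(spatial_saliency, flow_magnitude, beta=0.5):
    """Fuse a spatial saliency map with optical-flow magnitude into one
    spatiotemporal map via a weighted sum (illustrative fusion rule)."""
    s = spatial_saliency / (spatial_saliency.max() + 1e-8)
    m = flow_magnitude / (flow_magnitude.max() + 1e-8)
    return beta * s + (1.0 - beta) * m

def contour_seeds(saliency_map, quantile=0.9):
    """High-saliency pixels used as seeds for the convex active contour
    (CAC) model (threshold chosen for illustration)."""
    return saliency_map > np.quantile(saliency_map, quantile)

# Toy example with random maps standing in for real saliency and flow
spatial = np.random.rand(48, 64)
flow = np.random.rand(48, 64)
seeds = contour_seeds(spatiotemporal_saliency(spatial, flow))
print(seeds.sum(), "seed pixels")
```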
This paper addresses the use of an orthogonal polynomial basis transform in video classification because of its multiple advantages, especially for multiscale and multiresolution analysis, similar to the wavelet transform. In our approach, we exploit these advantages in three ways. First, we reduce the resolution of the video by using a multiscale/multiresolution decomposition. Second, we define a new algorithm that decomposes a color image into geometry and texture components by projecting the image on a bivariate polynomial basis, taking the geometry component as the partial reconstruction and the texture component as the remaining part. Finally, we model the features (such as motion and texture) extracted from the reduced image sequences by projecting them onto a bivariate polynomial basis in order to construct a hybrid polynomial motion-texture video descriptor. To evaluate our approach, we consider two visual recognition tasks, namely the classification of dynamic textures and the recognition of human actions. The experimental section shows that the proposed approach achieves a perfect recognition rate on the Weizmann database and the highest accuracy on the Dyntex++ database compared with existing methods.
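The geometry/texture split by partial reconstruction can be illustrated with a sampled Legendre basis, as in the sketch below: projecting a grayscale image onto a low-degree bivariate basis yields the smooth geometry component, and the residual is treated as texture. The choice of Legendre polynomials, the degree, and the orthonormalization step are assumptions made for the example; the paper's multiscale decomposition and descriptor construction are not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(n_samples, degree):
    """Legendre basis polynomials sampled on [-1, 1], orthonormalized so
    that projection reduces to a matrix product."""
    x = np.linspace(-1.0, 1.0, n_samples)
    basis = np.stack([legendre.Legendre.basis(d)(x) for d in range(degree + 1)], axis=1)
    q, _ = np.linalg.qr(basis)
    return q  # shape (n_samples, degree + 1)

def geometry_texture(image, degree=8):
    """Split a grayscale image into a smooth 'geometry' part (partial
    reconstruction on a low-degree bivariate polynomial basis) and a
    'texture' residual. Illustrative sketch of the decomposition idea."""
    by = legendre_basis(image.shape[0], degree)
    bx = legendre_basis(image.shape[1], degree)
    coeffs = by.T @ image @ bx          # project onto the bivariate basis
    geometry = by @ coeffs @ bx.T       # partial reconstruction
    texture = image - geometry          # remaining high-frequency part
    return geometry, texture

img = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) + 0.1 * np.random.rand(64, 64)
geom, tex = geometry_texture(img, degree=4)
print(np.abs(tex).mean())
```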