Coastal zones lie at the interface of land and sea and provide a buffer against storms, wave action, and coastal inundation. Bathymetric mapping of the submerged littoral zone is essential for understanding sediment transport and for effective coastal management and planning. Surf zones are dynamic, ever-changing environments, so there is a need for low-cost, rapid-response aerial remote sensing techniques that can provide high temporal and spatial coverage of nearshore bathymetry. However, this is a challenging task given water turbidity, wave action, sea foam, and other issues. With this motivation, this study used a small unoccupied aircraft system (UAS) equipped with a digital RGB camera to collect video footage of wave action on the water surface. The video data were then used to apply a spectral depth inversion algorithm called cBathy and estimate nearshore bathymetry at high resolution. Ground truth data were collected along cross-shore transect surveys to a depth of 2 m to assess the UAS-based bathymetry estimates. The video data were split into frames at 2 frames per second (fps), and ground control points (GCPs) laid out in the scene were used to georectify the imagery. A time stack of image pixel values was then generated from the video data as input to the cBathy depth inversion algorithm. Accuracy assessment yielded an overall RMSE of 0.2056 m over an area extending 390 m offshore and 400 m alongshore, with estimated depths reaching a maximum of 3 m. The results show the potential of the cBathy algorithm to provide reasonable depth accuracies in dynamic and turbid surf zones. However, they also show that the method has constraints, including the study site's physical characteristics, that users need to be aware of before applying it.
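To make the frame-splitting step concrete, the following is a minimal sketch of sampling a video at 2 fps with OpenCV. The file names and output layout are hypothetical; the abstract specifies only the 2 fps rate, not the tooling.

```python
import cv2  # OpenCV, assumed here for video decoding

def extract_frames(video_path, out_pattern, target_fps=2.0):
    """Save frames from a UAS video at a fixed sampling rate (e.g., 2 fps)."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS)
    step = max(int(round(native_fps / target_fps)), 1)  # keep every Nth frame
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(out_pattern % saved, frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Hypothetical usage: split a surf-zone video into 2 fps frames
# n = extract_frames("surfzone.mp4", "frames/frame_%05d.png", target_fps=2.0)
```

The extracted frames would then be georectified with the GCPs before building the pixel time stack that cBathy consumes.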
Among commercially available 3D scanning and imaging techniques, simultaneous localization and mapping (SLAM) is being studied intensively as a way to generate 2D/3D maps of an unknown environment while reliably tracking the user's location. Its mobility makes it well suited to mapping infrastructure whose vertical surfaces are frequently occluded in unmanned aircraft system (UAS) structure-from-motion (SfM) photogrammetry. In addition, indoor mapping with terrestrial laser scanning (TLS) can be cumbersome because multiple scan locations may be required. With the goal of producing a cohesive 3D model by fusing point clouds collected via aerial SfM photogrammetry, TLS, and SLAM, the purpose of this work is to assess the performance of the SLAM point cloud generated by a proprietary mobile backpack laser scanner (BLS). With maximum scanning range and information integration strategy as variables, the point clouds generated by the BLS were evaluated against the SfM and TLS datasets in terms of internal consistency and external accuracy. TLS, SfM, and SLAM data were collected in a typical university campus environment. For internal consistency, the SLAM-based point cloud with a maximum scanning range of 70 m presented a root mean square error (RMSE) of 2 mm, while the SLAM+GNSS-based point cloud presented the lowest internal precision, with RMSE = 0.861 m. After a fine adjustment of misalignment, the SLAM+GNSS 70 point cloud presented the highest vertical accuracy, with RMSE = 0.069 m, whereas the point cloud generated from SfM photogrammetry presented RMSE = 0.297 m. The BLS was able to generate point clouds with an accuracy similar to GNSS-RTK surveying and can be considered a viable solution for indoor and outdoor mapping applications.
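One common, generic way to score agreement between two point clouds is the RMSE of nearest-neighbor distances, sketched below with SciPy/NumPy. This is an illustrative metric under stated assumptions, not necessarily the exact evaluation procedure used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_rmse(reference, test):
    """RMSE of nearest-neighbor distances from each test point to the
    reference cloud. Both inputs are (N, 3) arrays of XYZ coordinates."""
    tree = cKDTree(reference)
    dists, _ = tree.query(test, k=1)  # distance to closest reference point
    return float(np.sqrt(np.mean(dists ** 2)))

# Hypothetical usage: compare a SLAM cloud against a TLS reference
# rmse = cloud_rmse(tls_xyz, slam_xyz)
```

Such a cloud-to-cloud metric presumes the two datasets are already co-registered; residual misalignment (as corrected by the fine adjustment mentioned above) would otherwise dominate the score.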
A small, fixed-wing unmanned aircraft system (UAS) was used to survey a replicated small-plot field experiment designed to estimate sorghum damage caused by an invasive aphid. Plant stress was varied among 40 plots through manipulation of aphid densities. Equipped with a consumer-grade near-infrared camera, the UAS was flown repeatedly over the growing season. The raw imagery was processed using structure-from-motion to generate normalized difference vegetation index (NDVI) maps of the fields and three-dimensional point clouds. NDVI and plant height metrics were averaged on a per-plot basis and evaluated for their ability to identify aphid-induced plant stress. Experimental soil-signal filtering was applied to both metrics, and a method that filters out low near-infrared values before NDVI calculation was found to be the most effective. UAS NDVI was compared with NDVI from sensors onboard a manned aircraft and a tractor; the strength of the correlation depended on growth stage. Plot averages of NDVI and canopy height were compared with per-plot yield at 14% moisture and with aphid density. The UAS measures of plant height and NDVI correlated with plot averages of yield and insect density, and negative correlations between aphid density and NDVI appeared near the end of the season in the most damaged crops.
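The NIR-threshold soil-filtering idea can be sketched as below with NumPy. The threshold value and band layout are illustrative assumptions, not the study's calibrated parameters.

```python
import numpy as np

def ndvi_soil_filtered(nir, red, nir_min=0.2):
    """Mask low-NIR (likely soil) pixels before computing NDVI.
    nir, red: float arrays of reflectance on [0, 1]; nir_min is a
    hypothetical cutoff below which pixels are treated as soil."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    keep = nir >= nir_min                  # retain likely-vegetation pixels
    ndvi = np.full(nir.shape, np.nan)
    denom = nir + red
    valid = keep & (denom > 0)
    ndvi[valid] = (nir[valid] - red[valid]) / denom[valid]
    return ndvi  # NaN where filtered; average per plot with np.nanmean
```

Filtering before the ratio is computed keeps dark soil pixels from dragging down per-plot NDVI averages, which is the effect the study's comparison was designed to isolate.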
Beef production is the main agricultural industry in Texas, and livestock are managed on pasture and rangeland that are typically vast and not easily accessible by vehicle. The current method for locating and counting livestock is visual observation, which is time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary, but they are noisy, disturb the animals, and may introduce errors into the counts. Such manual approaches are expensive, slow, and labor intensive. In this paper, we study the combination of small unmanned aerial vehicles (sUAVs) and machine vision technology as an alternative to manual animal surveying. A fixed-wing UAV fitted with a GPS receiver and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were covered in four UAS flights, and the individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This technique can even detect small animals that are partially occluded by bushes. Experimental results compared against ground truth show the effectiveness of the algorithm.
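A minimal sketch of a generic spatial-spectral detection pass over an orthomosaic tile is shown below using OpenCV/NumPy. The color thresholds and blob-size limits are illustrative assumptions, not the paper's tuned parameters.

```python
import cv2
import numpy as np

def detect_animals(bgr, lo=(0, 0, 140), hi=(120, 120, 255),
                   min_area=50, max_area=2000):
    """Threshold pixels whose color resembles an animal coat (spectral cue),
    then keep connected blobs whose pixel area matches an animal footprint
    at the imagery's ground sample distance (spatial cue)."""
    mask = cv2.inRange(bgr, np.array(lo, np.uint8), np.array(hi, np.uint8))
    # remove speckle noise before component analysis
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
```

Size-based filtering is also what lets a scheme like this keep a partially occluded animal: a blob shrunk by overhanging brush still falls within the accepted area range.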
KEYWORDS: Photogrammetry, RGB color model, Detection and tracking algorithms, Algorithm development, Agriculture, Systems modeling, Near infrared, Cameras, Accuracy assessment, Atomic force microscopy, Clouds, Genetics
Lodging is recognized as one of the major destructive factors for crop quality and yield, particularly in corn. A variety of contributing causes, e.g., disease and/or pests, weather conditions, excessive nitrogen, and high plant density, may lead to lodging before the harvest season. Traditional lodging detection strategies rely mainly on ground data collection, which is insufficient in both efficiency and accuracy. To address this problem, this research focuses on the use of unmanned aircraft systems (UAS) for automated detection of crop lodging. The study was conducted over an experimental corn field at the Texas A&M AgriLife Research and Extension Center at Corpus Christi, Texas, during the 2016 growing season. Nadir-view images of the corn field were taken weekly by small UAS platforms equipped with consumer-grade RGB and NIR cameras, enabling timely observation of plant growth. 3D structural information about the plants was reconstructed using structure-from-motion photogrammetry and then used to calculate crop height and growth rate. A lodging index for detecting corn lodging was then proposed. Ground truth data on lodging were collected on a per-row basis and used for fair assessment and tuning of the detection algorithm. Results show that the UAS-measured height correlates well with the ground-measured height. More importantly, the lodging index effectively reflects the severity of corn lodging and the yield after harvest.
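The abstract does not define the index, so the following NumPy sketch shows one plausible formulation consistent with the described inputs: a per-row index based on the relative drop in canopy height between consecutive weekly surveys. This is an illustrative assumption, not the study's actual index.

```python
import numpy as np

def lodging_index(height_prev, height_curr, eps=1e-6):
    """Per-row relative height drop between two survey dates.
    Values near 1 indicate severe lodging; near 0, upright plants."""
    h_prev = np.asarray(height_prev, dtype=float)
    h_curr = np.asarray(height_curr, dtype=float)
    drop = h_prev - h_curr                       # lodged rows lose height
    return np.clip(drop / np.maximum(h_prev, eps), 0.0, 1.0)

# Hypothetical usage with per-row mean canopy heights in meters:
# idx = lodging_index([2.1, 2.0, 1.9], [2.0, 1.1, 0.6])  # ~[0.05, 0.45, 0.68]
```

Any index of this form depends on the SfM heights being comparable across dates, which is why the weekly flights and the height validation against ground measurements matter.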
KEYWORDS: Visual process modeling, Clouds, LIDAR, Visualization, RGB color model, Atomic force microscopy, Data modeling, 3D modeling, Vegetation, Cameras
This paper explores the potential of using unmanned aircraft system (UAS)-based visible-band images to assess cotton growth. By applying the structure-from-motion algorithm, cotton plant height (ph) and canopy cover (cc) were retrieved from the point cloud-based digital surface models (DSMs) and orthomosaic images. Both UAS-based ph and cc follow a sigmoid growth pattern, as confirmed by ground-based studies. By applying an empirical model that converts cotton ph to cc, the estimated cc shows strong correlation (R2=0.990) with the observed cc. An attempt to model cotton yield was carried out using the ph and cc information obtained on June 26, 2015, the date when the sigmoid growth curves for both ph and cc began to decline in slope. In a cross-validation test, the correlations between the ground-measured yield and the estimates derived from ph and/or cc were compared. In general, yield estimates that combine ph and cc agree best with the observed yield, while cc-based estimates produce the second-strongest correlation, regardless of model complexity.
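A minimal sketch of fitting a sigmoid (logistic) growth curve to plant-height observations is shown below with SciPy; the sample dates, heights, and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: K = asymptotic height, r = growth rate,
    t0 = inflection day, where the curve's slope peaks."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical days-after-planting and mean plant height (m) per survey
t = np.array([20, 35, 50, 65, 80, 95], dtype=float)
ph = np.array([0.10, 0.25, 0.55, 0.85, 1.00, 1.05], dtype=float)

params, _ = curve_fit(logistic, t, ph, p0=[1.1, 0.1, 55.0])
K, r, t0 = params  # slope declines for t > t0
```

Locating where the fitted slope begins to decline is one way to identify a date like the June 26 observation the study used for yield modeling.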
KEYWORDS: LIDAR, Visibility, Clouds, Data modeling, Vegetation, Visual process modeling, Image segmentation, Data acquisition, 3D modeling, Systems modeling
Point cloud data collected by small-footprint lidar scanning systems have proven effective in modeling the forest canopy for extraction of tree parameters. Although line-of-sight visibility (LOSV) in complex forests may be important for military planning and search-and-rescue operations, the ability to estimate LOSV from lidar scanners is not well developed. We create a new estimator of below-canopy LOSV (BC-LOSV) that addresses the problem of lidar under-sampling of the forest understory. Airborne and terrestrial lidar scanning data were acquired for two forested sites to test a probabilistic model for BC-LOSV estimation solely from airborne lidar data. Individual crowns were segmented, and allometric projections of the probability model into the lower canopy and stem regions allowed estimation of the likelihood that vision-blocking elements are present along any given LOSV vector. Using terrestrial lidar scans as ground truth, we found an approximate average absolute difference of 20% between BC-LOSV estimates from the airborne and terrestrial point clouds, with minimal bias toward either over- or underestimation. The model shows the usefulness of a data-driven approach to BC-LOSV estimation that depends only on small-footprint airborne lidar point clouds and physical knowledge of tree phenology.
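The general idea of scoring a sight line against a grid of per-voxel blocking probabilities can be sketched as follows with NumPy. The grid construction, its origin at (0, 0, 0), and the independence assumption across voxels are simplifications for illustration, not the paper's allometric model.

```python
import numpy as np

def losv_probability(p_block, origin, target, voxel_size, n_samples=100):
    """Probability that a sight line is unobstructed, given a 3D grid
    p_block[i, j, k] of per-voxel blocking probabilities in [0, 1]."""
    pts = np.linspace(np.asarray(origin, float),
                      np.asarray(target, float), n_samples)  # sample the ray
    idx = np.floor(pts / voxel_size).astype(int)             # voxel indices
    idx = np.clip(idx, 0, np.array(p_block.shape) - 1)
    idx = np.unique(idx, axis=0)                             # each voxel once
    # treat voxels as independent: P(clear) = product of (1 - P(block))
    p_clear = np.prod(1.0 - p_block[idx[:, 0], idx[:, 1], idx[:, 2]])
    return float(p_clear)
```

In a BC-LOSV setting, the under-sampled lower canopy and stem voxels would carry probabilities projected downward from the segmented crowns rather than values measured directly from airborne returns.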