Over 795,000 Americans suffer a stroke every year, and stroke causes a death every 3.5 minutes. Approximately 87% of all strokes are Acute Ischemic Strokes (AIS), i.e., abrupt interruptions of cerebral circulation due to a blocked artery. Early prediction of final AIS outcomes (AIS lesions) is crucial to effective treatment planning for AIS patients. Owing to its speed, availability, and lack of contraindications, Computed Tomography Perfusion (CTP) is preferred over higher-resolution imaging modalities (e.g., MRI) for AIS lesion prediction. However, the low contrast of baseline CTP images makes it difficult to delineate AIS lesions precisely, whereas follow-up MRI images show them clearly. This paper therefore proposes a method for synthesizing follow-up MRI images from baseline CTP scans using a Temporal Generative Adversarial Network (TGAN), which encodes the baseline CTP frames with a series of encoders, followed by a decoder that forecasts the high-resolution follow-up MRIs. A discriminator competes with the generator to identify whether its input MRI is real or synthesized. Furthermore, our TGAN includes a segmentor that identifies AIS lesions in the synthesized MRI images. The generator, discriminator, and segmentor each use MultiRes U-Nets, an extension of the original U-Net architecture that robustly segments objects of various scales and shapes. Our experiments with Leave-One-Person-Out Cross-Validation (LOOCV) obtained an average Dice coefficient of 56.73%, a statistically significant result (p < 0.05). Compared with traditional methods based on CTP perfusion parameters, our novel method predicted AIS lesions more accurately.
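The three-player setup described above can be pictured with a minimal PyTorch sketch: a generator maps stacked CTP frames to a follow-up MRI, a discriminator scores real versus synthesized MRIs, and a segmentor predicts the lesion mask on the synthesized MRI. The tiny convolutional stand-ins, the BCE/Dice loss mix, the frame count T, and all tensor sizes are illustrative assumptions, not the paper's MultiRes U-Net implementation.

```python
# Illustrative sketch of the TGAN objective; architectures are stand-ins.
import torch
import torch.nn as nn

class TinyUNetLike(nn.Module):
    """Stand-in for a MultiRes U-Net: conv -> conv, same spatial size."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

T = 4                                   # number of CTP time frames (assumed)
gen = TinyUNetLike(T, 1)                # encodes stacked frames, decodes an MRI
disc = nn.Sequential(TinyUNetLike(1, 1), nn.AdaptiveAvgPool2d(1))  # real/fake score
seg = TinyUNetLike(1, 1)                # lesion mask on the synthesized MRI

bce = nn.BCEWithLogitsLoss()

def dice_loss(logits, target, eps=1e-6):
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    return 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

ctp = torch.randn(2, T, 64, 64)         # batch of CTP frame stacks (dummy data)
mri = torch.randn(2, 1, 64, 64)         # real follow-up MRIs (dummy data)
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()  # lesion labels (dummy)

fake = gen(ctp)
# Discriminator: distinguish real from synthesized MRIs.
d_loss = bce(disc(mri).flatten(1), torch.ones(2, 1)) + \
         bce(disc(fake.detach()).flatten(1), torch.zeros(2, 1))
# Generator: fool the discriminator; segmentor: find lesions on the fake MRI.
g_loss = bce(disc(fake).flatten(1), torch.ones(2, 1)) + dice_loss(seg(fake), mask)
print(d_loss.item(), g_loss.item())
```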
Colour coding is known to have a significant impact on navigation tasks in the real world, showing people where to go or highlighting regions of a building where specific activities are performed. It is frequently used in large buildings, such as hospitals, to help patients navigate and locate clinics. However, it is still not well understood how colour coding works in virtual reality systems, or how it influences tasks such as navigation and situation awareness. In this study, we explore the impact of colour coding on a navigation task by comparing participants' performance in a virtual world. Five different mazes of a similar level of complexity are assigned to the various schemes. In the first experiment, participants are asked to find the exit of a virtual maze without any assistance; in the second and third experiments, participants are given a two-dimensional (2D) map, with and without a global positioning system (GPS), as a guide to find the exit; in the fourth and fifth experiments, participants are given a 2D map with the selected colours embedded along the routes, again with and without a GPS to indicate direction. The experimental results will provide evidence on how the colour-coding scheme influences user performance in a virtual-world navigation task, which may strongly influence the future design of virtual reality training systems.
Augmented Reality (AR) is a departure from standard virtual reality in the sense that it allows users to see computer-generated virtual objects superimposed over the real world through a see-through head-mounted display. Users of such a system can interact in the real/virtual world using additional information, such as 3D virtual models and instructions on how to perform tasks in the form of video clips, annotations, speech instructions, and images. In this paper, we describe two prototypes of a collaborative industrial tele-training system. The distributed aspect of the system enables users at remote sites to collaborate on training tasks by sharing the view of a local user equipped with a wearable computer. Users can interactively manipulate virtual objects that substitute for real objects, allowing trainees to try out and discuss the various tasks that need to be performed. A new technique for identifying real-world objects and estimating their coordinates in 3D space is introduced. The method is based on a computer vision technique capable of identifying and locating the Binary Square Markers that identify each information station. Experimental results are presented.
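As a rough illustration of the pose-estimation step, the sketch below recovers a square marker's 3D position from its four detected image corners using OpenCV's generic solvePnP. The corner detections, marker size, and camera intrinsics are placeholder assumptions, and solvePnP stands in for the paper's own marker-location technique.

```python
# Estimating a square marker's 3D pose from its four image corners.
import numpy as np
import cv2

s = 0.10  # marker side length in metres (assumed)
# 3D corner coordinates in the marker's own frame (z = 0 plane).
obj_pts = np.array([[-s/2,  s/2, 0], [ s/2,  s/2, 0],
                    [ s/2, -s/2, 0], [-s/2, -s/2, 0]], dtype=np.float32)
# Image-plane corners, e.g. from a binary square-marker detector (dummy values).
img_pts = np.array([[310, 200], [410, 205], [405, 305], [305, 300]],
                   dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5, dtype=np.float32)  # assume an undistorted camera

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
if ok:
    print("marker position in camera frame (m):", tvec.ravel())
```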
Using a laser range sensor for 3D part digitization in inspection tasks yields very significant improvements in acquisition speed and in the density of 3D measurement points, but does not match the accuracy of a coordinate measuring machine (CMM). Inspection consists of verifying the accuracy of a part against a given set of tolerances, so the 3D measurements must be accurate. In the 3D capture of a part, several sources of error can alter the measured values. We therefore identify and model the parameters that most influence the accuracy of the range sensor during digitization. This model is used to produce a sensing plan for acquiring the geometry of a part completely and accurately. The sensing plan consists of a set of viewpoints that defines the exact position and orientation of the camera relative to the part. The 3D point cloud obtained from the sensing plan is registered with the CAD model of the part and then segmented according to the different surfaces. The segmentation results are used to check the tolerances of the part. Using the noise model, we assign a dispersion value to each 3D point acquired according to the sensing plan; this dispersion value appears as a weighting factor in the inspection results.
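A minimal sketch of the weighting idea, under the assumption that the noise model yields a per-point dispersion estimate: noisier points count less when checking a tolerance. The numbers, the inverse-variance weighting, and the tolerance are illustrative assumptions.

```python
# Dispersion-weighted tolerance check: noisier points get smaller weights.
import numpy as np

deviation = np.array([0.02, -0.05, 0.01, 0.08])   # point-to-CAD distances (mm)
sigma = np.array([0.01, 0.04, 0.01, 0.06])        # predicted dispersion (mm)

weights = 1.0 / sigma**2                          # inverse-variance weighting
weighted_rms = np.sqrt(np.sum(weights * deviation**2) / np.sum(weights))
tolerance = 0.05                                  # mm, assumed spec
print("weighted RMS deviation:", weighted_rms,
      "in tolerance:", weighted_rms <= tolerance)
```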
KEYWORDS: Space operations, Sensors, Space robots, Robotics, Control systems, Mining, Telecommunications, Virtual reality, Data communications, Prototyping
A set of tools addressing the problems specific to the control and monitoring of remote robotic systems over extreme distances has been developed. The tools include the capability to model and visualize the remote environment, to generate and edit complex task scripts, to execute the scripts in supervisory control mode, and to monitor and diagnose equipment from multiple remote locations. Two prototype systems were implemented for demonstration. The first, using a prototype joint design called Dexter, shows the applicability of the approach to space robotic operations in low Earth orbit. The second uses a remotely controlled excavator in an operational open-pit tar sand mine, demonstrating that the tools developed can serve planetary exploration operations as well as terrestrial mining applications.
KEYWORDS: 3D modeling, Virtual reality, Data modeling, 3D metrology, Environmental sensing, Mining, Optical imaging, Image sensors, Sensors, 3D image processing
The key to navigating a 3D environment, or to designing autonomous vehicles that can successfully maneuver and manipulate objects in their environment, is the ability to create, maintain, and effectively use a 3D digital model that accurately represents its physical counterpart. Virtual exploration of real places and environments, whether for leisure, engineering design, training and simulation, or tasks in remote or hazardous environments, is more effective and useful if the geometrical relationships and dimensions in the virtual model are accurate. Many applications need a system that can rapidly, reliably, remotely, and accurately perform 3D measurements for mapping indoor environments. In this paper we present a mobile mapping system designed to generate a geometrically precise 3D model of an unknown indoor environment. The same general design concept can be used for environments ranging from simple office hallways to long, winding underground mine tunnels. Surfaces and features can be accurately mapped from images acquired by a unique configuration of different types of optical imaging sensors and a dead-reckoning positioning device. This configuration guarantees that all the information required to create the 3D model of the environment is included in the collected data. Sufficient overlap between 2D intensity images, combined with information from 3D range images, ensures that the complete environment can be accurately reconstructed when all the data are processed simultaneously. The system, the data collection and processing procedure, test results, and the modeling and display at our virtual environment facility are described.
A new automated inspection algorithm for rapid-prototyped parts using a 3D laser range sensor is described. The input to the program is a series of trimmed NURBS saved in IGES format or an STL file, together with an unordered series of measurements produced by a 3D optical sensor. The output is an inspection report indicating the level of discrepancy between the measured points and the model. Using a color scheme, an operator can rapidly identify problems in the rapid prototyping process and validate whether the produced part conforms to the intended design. At the base of the method is a new robust correspondence algorithm that finds the rigid transformation between the tessellated model of the part and the measured points. The method is based on a least-median-of-squares norm and remains robust with up to 50 percent outliers. This robustness is essential, since in practice one cannot guarantee that all the points in the measured set belong to the model. Such algorithms are usually quite costly in computational complexity, but we show that they can be sped up by using the well-known iterative closest points algorithm and a multi-resolution scheme based on planar surface tessellation and voxels.
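The least-median-of-squares core can be sketched as follows: fit candidate rigid transforms to random minimal samples of point pairs and keep the one with the smallest median squared residual, which is what tolerates up to 50% outliers. The trial count and toy data are assumptions, and the closest-point and multi-resolution machinery of the full method is omitted.

```python
# Least-median-of-squares (LMedS) estimation of a rigid transform.
import numpy as np

rng = np.random.default_rng(0)

def rigid_from_pairs(P, Q):
    """Kabsch: best rotation R and translation t with Q ~ P @ R.T + t."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def lmeds_rigid(P, Q, trials=200):
    best = (np.inf, None, None)
    for _ in range(trials):
        idx = rng.choice(len(P), 3, replace=False)   # minimal sample
        R, t = rigid_from_pairs(P[idx], Q[idx])
        r2 = np.sum((P @ R.T + t - Q) ** 2, axis=1)
        med = np.median(r2)                          # robust score
        if med < best[0]:
            best = (med, R, t)
    return best[1], best[2]

# Toy data: 60% inliers under a known motion, 40% gross outliers.
P = rng.normal(size=(50, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))            # force a proper rotation
Q = P @ R_true.T + np.array([0.3, -0.1, 0.2])
Q[30:] += rng.normal(scale=5.0, size=(20, 3))       # corrupt 40% of pairs
R, t = lmeds_rigid(P, Q)
print("rotation error:", np.linalg.norm(R - R_true))
```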
Laser range sensors measure the 3D coordinates of points on the surface of objects. Range images taken from different points of view can provide a more or less complete coverage of an object's surface. The geometric information carried by the set of range images can be integrated into a unified, non-redundant triangular mesh describing the object. This model can then be used as the input to rapid prototyping or machining systems in order to produce a replica. Direct replication proves particularly useful for complex sculptured surfaces. The paper describes the proposed approach and the relevant algorithms, and discusses two test cases.
In this paper, we describe a new automated method for inspecting parts produced by rapid prototyping machines using a 3D range sensor. The input to the program is a tessellated representation of the part at a desired resolution, saved in the standard STL format, together with an unordered series of measurements produced by a 3D optical sensor. The output is an inspection report indicating the level of discrepancy between the measured points and the model. Using this inspection system, the operator of a rapid prototyping machine can rapidly identify fabrication defects or monitor process drift during the conversion process. At the base of the method is a new robust correspondence algorithm that finds the rigid transformation between the tessellated model of the part and the measured points. The method is based on a least-median-of-squares norm and remains robust with up to 50% outliers. This robustness is essential, since in practice one cannot guarantee that all the points in the measured set belong to the model. Such algorithms are usually quite costly in computational complexity, but we show that they can be sped up by using the well-known iterative closest point (ICP) algorithm and a multi-resolution representation scheme based on voxels. We analyze the complexity of the algorithm in detail and present experimental results on a complex part with both global and local inspection processes.
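The voxel acceleration can be sketched as follows: bucketing model points into a coarse grid lets each ICP closest-point query scan only neighbouring buckets instead of the whole model. The grid resolution and toy data are assumptions, and the full multi-resolution hierarchy is omitted.

```python
# Voxel bucketing to accelerate closest-point queries inside ICP.
import numpy as np
from collections import defaultdict
from itertools import product

rng = np.random.default_rng(1)
model = rng.uniform(0, 1, size=(5000, 3))
h = 0.05                                   # voxel edge length (assumed)

grid = defaultdict(list)
for i, p in enumerate(model):
    grid[tuple((p // h).astype(int))].append(i)

def closest_model_point(q):
    """Scan the query's voxel and its 26 neighbours; fall back to all points."""
    cx, cy, cz = (q // h).astype(int)
    cand = [j for d in product((-1, 0, 1), repeat=3)
            for j in grid.get((cx + d[0], cy + d[1], cz + d[2]), [])]
    cand = cand if cand else range(len(model))
    j = min(cand, key=lambda k: np.sum((model[k] - q) ** 2))
    return model[j]

q = rng.uniform(0, 1, size=3)
print("closest model point to", q, "is", closest_model_point(q))
```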
A new automated inspection algorithm for industrial parts using a 3D laser range sensor is described. The input to the program is a tessellated representation of the part at a desired resolution, saved in the neutral STL format, together with an unordered series of measurements produced by a 3D optical sensor. The output is a colored version of the model indicating the level of discrepancy between the measured points and the model. Using this coloring scheme, an operator or a robotic system can rapidly identify defective parts or monitor process drift on a production line. At the base of the method is a new robust correspondence algorithm that finds the rigid transformation between the tessellated model of the part and the measured points. The method is based on a least-median-of-squares norm and remains robust with up to 50% outliers. This robustness is essential, since in practice one cannot guarantee that all the points in the measured set belong to the model. Such algorithms are usually quite costly in computational complexity, but we show that they can be sped up by using the well-known iterative closest points algorithm and a multiresolution scheme based on voxels.
The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometry information and viewpoint, robust to noisy data and spurious points, and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial: it runs in O(m⁴n⁴), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is very occluded.
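The hypothesis/verification paradigm can be sketched as below, with patches reduced to unit normals and the angle between normals used as the pose-invariant pruning test. The patch descriptions, tolerance, and scoring are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothesize-and-verify matching of scene patches to model patches.
import numpy as np
from itertools import combinations, product

def angle(u, v):
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

model_n = [np.array(v, float) for v in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]]
scene_n = [np.array(v, float) for v in [(0, 1, 0), (0, 0, 1), (1, 0, 0)]]
model_n = [v / np.linalg.norm(v) for v in model_n]
scene_n = [v / np.linalg.norm(v) for v in scene_n]
tol = 0.05  # radians (assumed)

best = (0, None)
for (i, j), (a, b) in product(combinations(range(len(model_n)), 2),
                              combinations(range(len(scene_n)), 2)):
    # Hypothesis: model patch i <-> scene patch a, j <-> b.
    if abs(angle(model_n[i], model_n[j]) - angle(scene_n[a], scene_n[b])) > tol:
        continue  # prune: inter-patch angle is not preserved
    # Verification: count additional patch pairs consistent with the hypothesis.
    score = sum(1 for k, c in product(range(len(model_n)), range(len(scene_n)))
                if abs(angle(model_n[i], model_n[k])
                       - angle(scene_n[a], scene_n[c])) < tol)
    if score > best[0]:
        best = (score, ((i, a), (j, b)))
print("best hypothesis:", best)
```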
This paper presents a new multiscale edge detection algorithm. The algorithm is based on a new nonlinear filter, which produces a scale-space filtering analogous to Gaussian filtering but has several interesting properties, such as viewpoint invariance and automatic edge preservation. From this multiscale representation, the algorithm uses a multidimensional morphological operator to compute the position of edges. A mathematical analysis of the algorithm and its efficient software implementation are discussed. Experimental results illustrating the use of the filter to detect multiscale depth and orientation discontinuities in range images and significant edges in intensity images are also presented.
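For flavour, a generic multiscale morphological edge map: the morphological gradient (dilation minus erosion) highlights discontinuities, and growing the structuring element sweeps through scale. This is simpler than, and stands in for, the paper's viewpoint-invariant nonlinear filter; the synthetic image and threshold are assumptions.

```python
# Multiscale morphological gradient on a synthetic range image.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

# Synthetic "range image": a step in depth plus mild noise.
img = np.zeros((64, 64))
img[:, 32:] = 10.0
img += np.random.default_rng(2).normal(scale=0.05, size=img.shape)

for scale in (1, 2, 4):                     # structuring-element half-width
    size = 2 * scale + 1
    grad = grey_dilation(img, size=size) - grey_erosion(img, size=size)
    edges = grad > 5.0                      # threshold (assumed)
    print(f"scale {scale}: {edges.sum()} edge pixels")
```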
This paper presents a new method capable of segmenting range images into a set of Bezier surface patches directly compatible with most CAD systems. The algorithm is divided into four parts. First, an initial partition of the data set into regions following a third-order Bezier model is performed using a robust fitting algorithm constrained by the position of depth and orientation discontinuities. Second, an optimal region growing based on a new Bayesian decision criterion is computed. Third, generalization to a higher-order surface model is performed based on a statistical decision method. Fourth, at the final resolution, an approximation of the surface boundary is computed using a two-dimensional B-spline. The algorithm is fully automatic and does not require ad hoc parameter adjustment. Experimental results are presented.
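The surface-fitting core of the first step can be sketched as a linear least-squares fit of a Bezier patch on the Bernstein basis. The robust fitting, discontinuity constraints, and region machinery of the full algorithm are omitted, and the synthetic data are assumptions.

```python
# Least-squares fit of a bicubic Bezier patch z = f(u, v) to gridded data.
import numpy as np
from scipy.special import comb

def bernstein(n, i, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

n = 3                                    # cubic patch
u, v = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
z = np.sin(2 * u) * np.cos(v)            # synthetic depth samples

# Design matrix: one column per control point, basis B_i(u) * B_j(v).
A = np.stack([bernstein(n, i, u.ravel()) * bernstein(n, j, v.ravel())
              for i in range(n + 1) for j in range(n + 1)], axis=1)
ctrl, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
print("max fit error:", np.abs(A @ ctrl - z.ravel()).max())
```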
This paper presents a new method for the extraction of a rational Bezier surface from a set of data points. The algorithm is divided into four parts. First, a least-median-of-squares fitting algorithm is used to extract a Bezier surface from the data set. Second, from this initial surface model, an analysis of the data set is performed to eliminate outliers. Third, the algorithm improves the fit over the remaining points by modifying the weights of a rational Bezier surface using a non-linear optimization method; a further improvement of the fit is achieved using a new intrinsic parameterization technique. Fourth, an approximation of the region boundary is performed using a NURBS with knots. Experimental results show that the algorithm is robust and can precisely approximate complex surfaces.
Two processing methods using range data are shown to perform different navigation tasks for a mobile vehicle. The first, a low-level processing method based on mathematical morphology, computes the free space in real time and is used for collision avoidance. A parallel between this method and polar histogram techniques is drawn. The second method, based on a hierarchical segmentation technique, extracts a multi-resolution description of the range data produced by the sensor. This segmentation is used to describe the immediate environment of the vehicle using simple geometrical primitives, refine the vehicle position estimate, and create a detailed representation of the vehicle's immediate surroundings. Experimental results using the BIRIS range sensor are shown.
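The polar-histogram connection can be sketched as follows: range returns are binned by bearing to give a per-sector nearest obstacle, and a sector is free when its nearest return exceeds a safety margin. The sensor geometry, bin count, and margin are illustrative assumptions.

```python
# Polar-histogram view of free space from scattered range returns.
import numpy as np

rng = np.random.default_rng(3)
bearings = rng.uniform(-np.pi / 2, np.pi / 2, 500)    # ray angles (rad)
ranges = rng.uniform(0.2, 5.0, 500)                   # measured depth (m)

n_sectors, safe = 18, 1.0                             # 10-degree bins, 1 m margin
edges = np.linspace(-np.pi / 2, np.pi / 2, n_sectors + 1)
sector = np.digitize(bearings, edges) - 1
nearest = np.full(n_sectors, np.inf)
np.minimum.at(nearest, sector, ranges)                # min range per sector
print("free sectors:", np.nonzero(nearest > safe)[0])
```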
KEYWORDS: Image segmentation, Image processing algorithms and systems, Data modeling, Error analysis, Robots, Machine vision, Computer vision technology, Robot vision, Detection and tracking algorithms, Signal to noise ratio
This paper describes recent work on the hierarchical segmentation of range images. The algorithm starts with an initial partition into small planar regions obtained by a robust fitting method constrained by the detection of depth and orientation discontinuities. From this initial partition, represented by an adjacency graph structure, regions are optimally grouped into larger and larger regions until an approximation limit is reached. The algorithm uses Bayesian decision theory to determine the locally optimal grouping and the geometrical complexity of the approximating surface. The result is a hierarchical structure that can represent objects at varying levels of detail by scanning through the generated hierarchy. Experimental results are presented.
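A greatly simplified sketch of grouping on an adjacency graph: greedily merge the adjacent pair whose union is best explained by a single planar fit, stopping when the cheapest merge exceeds a limit. The plain residual cost below stands in for the paper's Bayesian decision criterion, and all data and thresholds are assumptions.

```python
# Greedy region merging on an adjacency graph with a planar-fit cost.
import numpy as np

def plane_residual(pts):
    """Sum of squared distances of pts to their best-fit plane (via SVD)."""
    c = pts.mean(0)
    _, s, _ = np.linalg.svd(pts - c)
    return s[-1] ** 2          # smallest singular value^2 = residual energy

rng = np.random.default_rng(4)
regions = {i: rng.normal(size=(30, 3)) * [1, 1, 0.01] + [i, 0, 0]
           for i in range(4)}                       # 4 roughly planar patches
adj = {(0, 1), (1, 2), (2, 3)}                      # adjacency graph
limit = 1.0                                         # merge-cost limit (assumed)

while adj:
    cost, pair = min(((plane_residual(np.vstack((regions[a], regions[b])))
                       - plane_residual(regions[a]) - plane_residual(regions[b]),
                       (a, b)) for a, b in adj), key=lambda x: x[0])
    if cost > limit:
        break
    a, b = pair
    regions[a] = np.vstack((regions[a], regions.pop(b)))
    # Redirect b's edges to a, drop the merged edge and any self-loops.
    adj = {(x if x != b else a, y if y != b else a) for x, y in adj
           if {x, y} != {a, b}}
    adj = {(min(p), max(p)) for p in adj if p[0] != p[1]}
print("regions after grouping:", len(regions))
```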
This paper presents a new type of adaptive smoothing technique for range images based on intrinsic properties of differential geometry. The concept of intrinsic surface distance and the parallel transport theorem were used to design a filter that is invariant to the viewpoint and is capable of preserving depth and orientation discontinuities. Experimental results are presented.
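For intuition, a generic edge-stopping average in one dimension: neighbours across a large depth jump get negligible weight, so the discontinuity survives smoothing. This is much simpler than, and merely stands in for, the paper's intrinsic, viewpoint-invariant filter; the data and edge-stopping scale are assumptions.

```python
# Edge-preserving smoothing of a 1-D depth profile with a step.
import numpy as np

rng = np.random.default_rng(5)
depth = np.concatenate([np.full(50, 1.0), np.full(50, 2.0)])  # depth step
depth += rng.normal(scale=0.02, size=depth.size)

sigma = 0.1                                       # edge-stopping scale (assumed)
out = depth.copy()
for i in range(1, depth.size - 1):
    nbr = depth[[i - 1, i, i + 1]]
    w = np.exp(-((nbr - depth[i]) / sigma) ** 2)  # tiny weight across edges
    out[i] = np.sum(w * nbr) / np.sum(w)
print("step preserved:", abs(out[49] - out[50]) > 0.5)
```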
We present a multiscale edge- and region-based segmentation scheme for range images which leads to a rich representation in terms of focused edges and constant-sign curvature regions. At a given scale, the edge detection scheme extracts the discontinuities in surface depth and orientation. The extraction is accomplished by first detecting the presence of significant edges at that scale and then determining their precise location by tracking them over decreasing scales down to the finest scale, at which they are most accurately positioned. At the same scale, the region-based segmentation scheme applies anisotropic filtering to the regions delimited by the previously detected edges and segments these regions into surface primitives of constant Gaussian and mean curvature signs. Experimental results are presented for both synthetic and real images, and a comparison is made with some recently published techniques.