Volumetric additive manufacturing (VAM) via tomographic projection is an emerging platform for ultra-rapid 3D printing. Because all layers are projected in parallel, print times orders of magnitude faster than standard polymer 3D printing can be achieved, without the need for support scaffolds. Despite these advantages, print results are in certain cases inferior to those of commercial vat polymerization due to the relative infancy of the technique. In this talk, we will outline recent progress made at the National Research Council of Canada to extend the capabilities of VAM and address its inherent challenges. Topics covered will include the role of depletion and polymerization kinetics in print quality, and novel resins for larger, faster, and functional prints.
In this talk, we present a new methodology for computing projections in tomographic additive manufacturing. Currently, tomographic printing systems require that the light rays in the print volume be parallel and have low étendue. In this work, we show that accurate modeling of the light rays through the print volume enables improved printing in systems with diverging beams. We also demonstrate that ray tracing can compensate for non-parallel projection in 3D. We anticipate that our ray-tracing methodology will relax the hardware requirements of the conventional Radon-based approach and enable a broader range of tomographic printing configurations.
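Below is a minimal 2D sketch of the kind of ray-traced dose model this implies: per-ray intensities from a diverging fan are marched through the build volume and accumulated, rather than being back-projected along the parallel rays assumed by the Radon transform. The geometry (point source at an assumed distance, unit-square build region) and all parameter values are illustrative, not those of our system.

```python
# Illustrative fan-beam dose deposition by ray marching (assumed geometry:
# point source on a circle of radius `src_dist`, build volume spanning [-1, 1]^2).
import numpy as np

def deposit_dose(projections, n=128, src_dist=3.0, step=0.005):
    """projections: (n_angles, n_rays) per-ray light intensities.
    Returns the dose accumulated on an n x n grid covering the build volume."""
    n_angles, n_rays = projections.shape
    dose = np.zeros((n, n))
    half_fan = np.arctan(1.0 / src_dist)              # fan just covers the unit radius
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    fan = np.linspace(-half_fan, half_fan, n_rays)
    for a, theta in enumerate(angles):
        src = src_dist * np.array([np.cos(theta), np.sin(theta)])
        axis = -src / np.linalg.norm(src)             # central ray aims at the rotation axis
        for r, phi in enumerate(fan):
            c, s = np.cos(phi), np.sin(phi)
            d = np.array([c * axis[0] - s * axis[1],  # rotate the central ray by the fan angle
                          s * axis[0] + c * axis[1]])
            for t in np.arange(src_dist - 1.5, src_dist + 1.5, step):
                p = src + t * d
                i = int((p[0] + 1.0) / 2.0 * (n - 1))
                j = int((p[1] + 1.0) / 2.0 * (n - 1))
                if 0 <= i < n and 0 <= j < n:
                    dose[j, i] += projections[a, r] * step
    return dose
```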
In this talk, we will present our observations of printing kinetics in light-based tomographic additive manufacturing using optical scattering tomography. In particular, we report a dependence of polymerization time on feature size that contributes significantly to errors in the printed object: small features tend to polymerize more slowly than large features. As a result, prints either lack small features or have overexposed large features. We investigate the cause of this feature-size dependence of polymerization time and present techniques to correct for these errors.
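As one hedged illustration of such a correction, the sketch below boosts the target dose of thin regions, approximating the local feature size with a distance transform. The correction form and the `gain` and `ref_size` parameters are placeholders, not the technique reported in the talk.

```python
# Illustrative feature-size-dependent dose compensation (assumed linear boost;
# `ref_size` is a hypothetical reference width in pixels, `gain` is unitless).
import numpy as np
from scipy.ndimage import distance_transform_edt

def size_compensated_target(binary_target, ref_size=10.0, gain=0.5):
    """binary_target: 2D boolean slice of the object to print.
    Returns a real-valued target dose in which thin features are weighted up."""
    local_size = 2.0 * distance_transform_edt(binary_target)   # ~ local feature width
    boost = 1.0 + gain * np.clip(1.0 - local_size / ref_size, 0.0, None)
    return binary_target.astype(float) * boost
```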
Tomographic printing is a 3D printing technique that enables fast, supportless fabrication, wherein light is projected through a rotating vial containing a photocurable resin. Usually, the vial is placed in an index-matching bath to eliminate refraction at the vial surface. In this talk, we will describe our approach to building an easy-to-use tomographic printing system that eliminates the index-matching bath. We use a computational ray-tracing approach to pre-distort the projection images so as to exactly counteract the distortion caused by refraction at the air/vial interface and by projector non-telecentricity. We will show simulation and print examples and expand on recent improvements to our system.
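The core geometric step of such a pre-distortion is refracting each projector ray at the air/vial interface. A minimal sketch of that step, using the standard vector form of Snell's law, is shown below; the function name and interface are illustrative, not our implementation.

```python
# Vector form of Snell's law for refracting one ray at an interface
# (e.g., air -> vial wall). The direction and the normal are unit vectors;
# the normal points toward the incoming ray.
import numpy as np

def refract(d, n_hat, n1, n2):
    """Return the refracted unit direction, or None on total internal reflection."""
    r = n1 / n2
    cos_i = -np.dot(n_hat, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                                   # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n_hat
```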
One of the challenges in high-precision manufacturing is continuous inspection, together with efficient communication of the inspection results to workers. In this context, we present a multi-modal 3D imaging system designed for computer-assisted assembly manufacturing using augmented reality. The three-dimensional measurement subsystem is a structured-light system based on a digital micro-mirror device (DMD). The augmented reality imagery is displayed on the components being manufactured using a second DMD-based color projector operating at wavelengths that do not interfere with the 3D measurements. A thermal camera is also part of the system and is calibrated with respect to the measurement and projection subsystems. In typical usage, the system can display localized shape deviation with respect to nominal values, the surface temperature across the component, or any information obtained or derived from the subsystems. Moreover, it can be used to display assembly instructions and to validate the compliance of the final manufactured component.
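A small hedged sketch of how calibration ties the subsystems together is given below: a 3D point measured by the structured-light subsystem is re-projected into the pixel grid of the AR projector so that an annotation (shape deviation, temperature) lands on the corresponding location of the part. The data layout (rigid transform plus pinhole intrinsics) is an assumption for illustration.

```python
# Illustrative re-projection of measured 3D points into AR projector pixels
# (assumed calibration: rigid transform R, t plus 3x3 pinhole intrinsics K).
import numpy as np

def to_projector_pixels(points_3d, R, t, K):
    """points_3d: (N, 3) points in the measurement frame.
    Returns (N, 2) pixel coordinates in the AR projector image."""
    cam = points_3d @ R.T + t                 # express points in the projector frame
    pix = cam @ K.T                           # apply projector intrinsics
    return pix[:, :2] / pix[:, 2:3]           # perspective divide
```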
This paper presents an industrial augmented reality system that simultaneously measures a component, identifies possible defects, and displays the inspection result directly on the component. The processing is done in real time using a single DMD-based projector for both the inspection and the augmented reality. The use of a single DMD eliminates the issue of registration between the component being inspected and an auxiliary augmented reality projector. The use of a single projection system also eliminates possible occlusions due to parallax between two projection systems. The system uses an algorithm that computes, at video frame rate, the temporal sequences of micromirror positions that, when imaged by a high-speed camera, contain a set of structured-light patterns. The temporal sequences are designed such that a human observer sees the desired augmented information. The proposed prototype can acquire 12 range images per second. The range uncertainty at 1σ is 14 μm, and each range image contains approximately one million 3D points.
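The sketch below illustrates one possible (assumed, simplified) way to build such a sequence: a few frames are fixed structured-light patterns, and the remaining frames are filled per pixel so that the temporal mean approaches the desired augmented-reality image. The real-time algorithm used by the prototype is more involved than this.

```python
# Illustrative per-pixel binary DMD sequence: frames 0..k-1 are the fixed
# structured-light patterns; the remaining frames pad each pixel's on-count so
# the temporal average seen by a human approximates the AR image (assumed scheme).
import numpy as np

def build_sequence(sl_patterns, ar_image):
    """sl_patterns: (k, H, W) binary structured-light frames.
    ar_image: (H, W) desired mean intensity in [0, 1].
    Returns an (n, H, W) binary sequence with n = 4 * k frames."""
    k, H, W = sl_patterns.shape
    n = 4 * k
    seq = np.zeros((n, H, W), dtype=np.uint8)
    seq[:k] = sl_patterns.astype(np.uint8)
    target_on = np.rint(ar_image * n).astype(int)              # frames needed per pixel
    extra = np.clip(target_on - seq[:k].sum(axis=0).astype(int), 0, n - k)
    for f in range(n - k):
        seq[k + f] = (extra > f).astype(np.uint8)              # fill the free frames
    return seq
```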
This paper presents two structured-light 3D imaging systems that use a quasi-analogue projection subsystem based on a combination of a digital micromirror device (DMD), optics, digital processing, and a calibration procedure. The first system is a high-resolution prototype that acquires 12M 3D points per frame; the second is a high-speed prototype that generates 150 3D frames per second, with 2M 3D points per frame. The projection subsystem can produce high-frame-rate, high-contrast, and high-resolution patterns using off-the-shelf components. The structured-light patterns used are the same combination of binary and sine-wave fringes as those usually encountered in commercial systems. The proposed systems generate the sine-wave patterns using a single binary image, thereby exploiting the high frame rate of the DMD.
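A hedged sketch of how a single binary image can encode a sinusoidal fringe is shown below, using ordered (Bayer) dithering and a crude Gaussian model of the optical low-pass; the actual quasi-analogue encoding and calibration used in the prototypes may differ.

```python
# Illustrative encoding of a sinusoidal fringe as one binary DMD image via
# ordered dithering (assumed scheme); `gaussian_filter` stands in for the
# optical low-pass that turns the binary pattern into a quasi-analogue fringe.
import numpy as np
from scipy.ndimage import gaussian_filter

BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0          # 4x4 dither thresholds

def binary_fringe(width=1024, height=256, period=64, blur_sigma=3.0):
    x = np.arange(width)
    target = 0.5 + 0.5 * np.sin(2.0 * np.pi * x / period)     # desired fringe profile
    target2d = np.broadcast_to(target, (height, width))
    thresh = np.tile(BAYER4, (height // 4, width // 4))       # tiled dither mask
    binary = (target2d > thresh).astype(float)                # single binary DMD image
    seen = gaussian_filter(binary, blur_sigma)                # crude optical blur model
    return binary, seen
```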
Range sensors have drawn much interest for human-activity-related research since they provide explicit 3D information about shape that is invariant to clothing, skin color, and illumination changes. However, triangulation-based systems such as structured-light sensors generate occlusions in the image when parts of the scene cannot be seen by both the projector and the camera. These occlusions, as well as missing data points and measurement noise, depend on the structured-light system design. Such artifacts add a level of difficulty to the task of human body segmentation that is typically not addressed in the literature. In this work, we design a segmentation model that is able to reason about 3D spatial information, to identify the different body parts in motion, and is robust to artifacts inherent to the structured-light system, such as triangulation occlusions, noise, and missing data. First, we build the first realistic sensor-specific training set by closely simulating the actual acquisition scenario with the same intrinsic parameters as our sensor and the artifacts it generates. Second, we adapt a state-of-the-art fully convolutional network to range images of the human body so that its learning transfers toward 3D spatial information instead of light intensities. Third, we quantitatively demonstrate the importance of simulating sensor-specific artifacts in the training set to improve the robustness of the segmentation of actual range images. Finally, we show the capability of the model to accurately segment human body parts on real range-image sequences acquired by our structured-light sensor, with high inter-frame consistency and in real time.
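A minimal sketch of the artifact-simulation idea is given below, with an assumed horizontal camera-projector baseline and illustrative thresholds; the training-set generation described above is considerably more faithful to the sensor.

```python
# Illustrative degradation of a clean synthetic range image with structured-light
# artifacts: occlusion behind depth steps along the (assumed horizontal) baseline,
# random dropout, and additive depth noise. Thresholds and noise level are
# placeholders, not the sensor's measured characteristics.
import numpy as np

def degrade_range_image(depth, grad_thresh=0.05, dropout=0.02,
                        noise_std=0.003, seed=0):
    """depth: (H, W) clean range image in metres. Returns the degraded image
    with invalid pixels set to NaN."""
    rng = np.random.default_rng(seed)
    d = depth.astype(float).copy()
    step = np.diff(d, axis=1, prepend=d[:, :1])        # depth change along the baseline
    occluded = step > grad_thresh                      # shadowed just past a depth jump
    missing = rng.random(d.shape) < dropout            # random missing measurements
    d += rng.normal(0.0, noise_std, d.shape)           # measurement noise
    d[occluded | missing] = np.nan
    return d
```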
We present a parallel implementation of statistical shape model registration to 3D ultrasound images of the lumbar vertebrae (L2-L4). The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization technique, along with the Linear Correlation of Linear Combination (LC²) similarity metric, is used to improve the robustness and capture range of the registration approach. Model instantiation and ultrasound simulation have been implemented on a graphics processing unit for faster registration. Phantom studies show a mean target registration error of 3.2 mm, while 80% of all cases yield a target registration error below 3.5 mm.
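A hedged sketch of the optimization loop is shown below, using the open-source `cma` package and a placeholder similarity (normalized cross-correlation rather than the LC² metric described above); `simulate_us` stands in for the GPU-accelerated instantiation and ultrasound simulation.

```python
# Illustrative CMA-ES registration loop. `cma` is the open-source pycma package;
# `simulate_us(x)` is a user-supplied callable that instantiates the shape model
# for parameters x and simulates an ultrasound volume (the GPU step in the paper).
import numpy as np
import cma

def register(simulate_us, real_us, x0, sigma0=0.5):
    """x0: initial pose and shape coefficients. Returns the best parameters found."""
    def cost(x):
        sim = simulate_us(np.asarray(x))
        a, b = sim - sim.mean(), real_us - real_us.mean()
        # Placeholder similarity: negative normalized cross-correlation
        return -float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    es = cma.CMAEvolutionStrategy(x0, sigma0)
    es.optimize(cost)
    return es.result.xbest
```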
Three-dimensional models of the spine are very important in diagnosing, assessing, and studying spinal deformities. These models are generally computed using multi-planar radiography, since it minimizes the radiation dose delivered to patients and allows them to assume a natural standing position during image acquisition. However, conventional reconstruction methods require at least two sufficiently distant radiographs (e.g., posterior-anterior and lateral radiographs) to compute a satisfactory model. The applicability of 3D reconstruction can nevertheless be expanded by using a statistical model of the entire spine shape. In this paper, we describe a reconstruction method that takes advantage of a multi-body statistical model to reconstruct 3D spine models. This method can be applied to any number of radiographs and can also integrate prior knowledge about spine length or preexisting vertebral models. Radiographs obtained from a group of 37 scoliotic patients were used to validate the proposed reconstruction method using a single posterior-anterior radiograph. Moreover, we present simulation results comparing 3D reconstructions obtained from two radiographs using the proposed method and using the direct linear transform method. Results indicate that it is possible to reconstruct 3D spine models from a single radiograph, and that their accuracy is improved by the addition of constraints, such as prior knowledge of spine length or of the vertebral anatomy. Results also indicate that the proposed method can improve the accuracy of 3D spine models computed from two radiographs.
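A minimal sketch of this type of statistical reconstruction is given below: shape coefficients of a PCA model are fitted so that projected 3D landmarks match the 2D landmarks identified in one or more calibrated radiographs, with a Gaussian prior on the coefficients. The single-body, landmark-based formulation and the parameter names are illustrative; the multi-body model used in the paper is richer.

```python
# Illustrative statistical-shape reconstruction from calibrated radiographs
# (hypothetical landmark-based formulation; not the paper's multi-body method).
import numpy as np
from scipy.optimize import least_squares

def reconstruct(mean_shape, modes, projections, observed_2d, prior_weight=1.0):
    """mean_shape: (N, 3); modes: (K, N, 3) PCA modes of the model.
    projections: list of (3, 4) camera matrices, one per radiograph.
    observed_2d: list of (N, 2) landmark positions in each radiograph."""
    K = modes.shape[0]

    def residuals(b):
        pts = mean_shape + np.tensordot(b, modes, axes=1)        # instantiate the model
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])     # homogeneous coordinates
        res = []
        for P, obs in zip(projections, observed_2d):
            proj = pts_h @ P.T
            proj = proj[:, :2] / proj[:, 2:3]                    # perspective divide
            res.append((proj - obs).ravel())
        res.append(prior_weight * b)                             # Gaussian shape prior
        return np.concatenate(res)

    fit = least_squares(residuals, np.zeros(K))
    return mean_shape + np.tensordot(fit.x, modes, axes=1)
```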