In a project to meet requirements for CBP Laboratory analysis of footwear under the Harmonized Tariff Schedule of the
United States (HTSUS), a hybrid metrology system comprising both optical and touch probe devices has been
assembled. A unique requirement must be met: to identify the interface (typically obscured in samples of concern)
between the "external surface area upper" (ESAU) and the sole without physically destroying the sample. The sample outer
surface is determined by discrete point cloud coordinates obtained using laser scanner optical measurements.
Measurements from the optically inaccessible insole region are obtained using a coordinate measuring machine (CMM).
That surface similarly is defined by point cloud data.
Mathematically, the individual CMM and scanner data sets are transformed into a single, common reference frame.
Custom software then fits a polynomial surface to the insole data and extends it to intersect the mesh fitted to the outer
surface point cloud. This line of intersection defines the required ESAU boundary, thus permitting further fractional
area calculations to determine the percentage of materials present.
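As a rough illustration of the surface-fitting step described above (a minimal sketch under assumed data layouts, not the laboratory's actual software), the Python fragment below fits a low-order polynomial surface z = f(x, y) to the CMM insole points by least squares and then flags outer-surface mesh edges whose endpoints lie on opposite sides of the extrapolated surface; those crossing edges trace the parting line.

```python
# Sketch only: assumed (N, 3) arrays of insole points and mesh vertices, plus
# index-pair mesh edges; not the actual CBP method implementation.
import numpy as np

def fit_poly_surface(points, order=2):
    """Least-squares fit of z = f(x, y) to an (N, 3) insole point cloud."""
    x, y, z = points.T
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), z, rcond=None)
    return coeffs

def eval_poly_surface(coeffs, x, y, order=2):
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols) @ coeffs

def parting_line_edges(vertices, edges, coeffs, order=2):
    """Mesh edges (vertex index pairs) that straddle the extrapolated insole surface."""
    residual = vertices[:, 2] - eval_poly_surface(coeffs, vertices[:, 0], vertices[:, 1], order)
    sign = np.sign(residual)
    return [(i, j) for i, j in edges if sign[i] * sign[j] < 0]
```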
With a draft method in place, and first-level method validation underway, we examine the transformation of the two
dissimilar data sets into the single, common reference frame. We also consider the six previously identified
potential error factors in relation to the method process. This paper reports our ongoing work and discusses our
findings to date.
During his explorations of Africa, David Livingstone kept a diary and wrote letters about his experiences. Near the end
of his travels, he ran out of paper and ink and began recording his thoughts on leftover newspaper with ink made from
local seeds. These writings suffer from fading, from interference with the printed text, and from bleed-through of the
handwriting on the other side of the paper, making them hard to read. New image processing techniques have been
developed to deal with these papers and make Livingstone's handwriting legible to scholars.
A scan of David Livingstone's papers was made using a twelve-wavelength, multispectral imaging system. The
wavelengths ranged from the ultraviolet to the near infrared. In these wavelengths, the three different types of writing
behave differently, making them distinguishable from each other. So far, three methods have been used to recover
Livingstone's handwriting. These include pseudocolor (to make the different writings distinguishable), spectral band
ratios (to remove text that does not change), and principal components analysis (to separate the different writings). In
initial trials, these techniques have been able to lift handwriting off printed text and have suppressed handwriting that has
bled through from the other side of the paper.
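To illustrate two of the recovery methods in conventional terms, here is a minimal Python sketch that applies principal components analysis to a registered multispectral cube and computes a simple band ratio; the (bands, height, width) layout and function names are assumptions for illustration, not the project's actual pipeline.

```python
# Sketch only: PCA and band ratios on an assumed 12-band registered image cube.
import numpy as np

def pca_components(stack):
    """stack: (bands, height, width) cube; returns component images, strongest first."""
    bands, h, w = stack.shape
    X = stack.reshape(bands, -1).T.astype(float)   # one spectral vector per pixel
    X -= X.mean(axis=0)                            # center each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]              # sort by explained variance
    scores = X @ eigvecs[:, order]                 # project pixels onto components
    return scores.T.reshape(bands, h, w)

def band_ratio(stack, a, b, eps=1e-6):
    """Dividing two bands suppresses features whose reflectance barely changes between them."""
    return stack[a] / (stack[b] + eps)
```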
Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying
the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that
boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its
insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The
insole surface is defined by point cloud data, obtained using a touch-probe device, a coordinate measuring machine
(CMM).
Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended
to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary,
permitting further fractional area calculations to proceed.
The defined parting line location is sensitive to the polynomial used to fit the experimental data. Extrapolation to the
intersection that defines the ESAU boundary can heighten this sensitivity. We discuss a methodology for transforming
these data into a common reference frame. Three error sources are considered: measurement error in the point cloud
coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error
from the reference frame transformation. These error sources can influence the calculated surface areas. We describe
experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways to minimize the
impact of error on calculated quantities.
Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
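One conventional way to bring the CMM and scanner point clouds into a single reference frame is a least-squares rigid alignment computed from matched fiducial points. The sketch below assumes a Kabsch-style solution and hypothetical fiducial correspondences; it illustrates the idea rather than the method's actual registration code.

```python
# Sketch only: rigid (rotation + translation) alignment of matched 3-D fiducials.
import numpy as np

def rigid_transform(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 over matched points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against an improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def apply_transform(points, R, t):
    """Map an (N, 3) point cloud into the common reference frame."""
    return points @ R.T + t
```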
The Library of Congress' Preservation Research and Testing Division has established an advanced preservation studies
scientific program for research and analysis of the diverse range of cultural heritage objects in its collection. Using this
system, the Library is currently developing specialized integrated research methodologies for extending preservation
analytical capacities through non-destructive hyperspectral imaging of cultural objects. The research program has
revealed key information to support preservation specialists, scholars and other institutions. The approach requires close
and ongoing collaboration between a range of scientific and cultural heritage personnel: imaging and preservation
scientists, art historians, curators, conservators, and technology analysts. A research project on the 1791 Pierre L'Enfant
Plan of Washington, DC, was undertaken to implement and advance the image analysis capabilities of the imaging
system. Innovative imaging options and analysis techniques provide greater processing and analysis capacity, establishing
the imaging technique as the initial non-invasive analysis and documentation step in cultural heritage analyses.
Mapping spectral responses, organic and inorganic data, and topography, along with semi-microscopic imaging and
full-spectrum image creation, has greatly extended this capacity beyond simple image capture. Linking hyperspectral data
with other non-destructive analyses has further enhanced the research potential of this image analysis technique.
The Archimedes Palimpsest imaging team has developed a spectral imaging system and associated processing
techniques for general use with palimpsests and other artifacts. It includes an illumination system of light-emitting
diodes (LEDs) in 13 narrow bands from the near ultraviolet through the near infrared (Δλ ≤ 40 nm), blue and infrared
LEDs at raking angles, high-resolution monochrome and color sensors, a variety of image collection techniques
(including spectral imaging of emitted fluorescence), standard metadata records, and image processing algorithms,
including pseudocolor renderings and principal component analysis (PCA). This paper addresses the development
and optimization of these techniques for the study of parchment palimpsests and the adaptation of these techniques to
allow flexibility for new technologies and processing capabilities. The system has proven useful for extracting text from
several palimpsests, including all original manuscripts in the Archimedes Palimpsest, the undertext in a privately owned
9th-century Syriac palimpsest, and selected palimpsested leaves surveyed at St. Catherine's Monastery in Egypt. In
addition, the system is being used at the U.S. Library of Congress for spectral imaging of historical manuscripts and
other documents.
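As an illustration of the pseudocolor rendering idea only (the band choices here are assumptions, not the imaging team's exact recipe), the sketch below places an ultraviolet-illuminated band and a near-infrared band in different color channels so that writings with different spectral responses appear in different hues.

```python
# Sketch only: simple two-band pseudocolor composite from registered images.
import numpy as np

def stretch(img, lo_pct=2, hi_pct=98):
    """Percentile contrast stretch to [0, 1]."""
    lo, hi = np.percentile(img, (lo_pct, hi_pct))
    return np.clip((img - lo) / (hi - lo + 1e-9), 0.0, 1.0)

def pseudocolor(uv_band, ir_band):
    """Red carries the near-infrared band; green and blue carry the ultraviolet band."""
    rgb = np.dstack([stretch(ir_band), stretch(uv_band), stretch(uv_band)])
    return (rgb * 255).astype(np.uint8)
```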
Imaging techniques have been developed to better account for fluorescent emission that may accompany reflected light during capture and processing of high color-fidelity imagery from art works and important historical documents. This approach is based on sequential capture of monochrome images of the object or
scene, each illuminated by a narrow spectral band from a bank of light-emitting diodes (LEDs) in the ultraviolet, visible, and near-infrared spectral regions. These images contain color reference materials in the field of view and are augmented by images in which bandpass filters are placed in the capture path. Processing of these images allows the separate contributions of reflectance and fluorescence emission in narrow wavelength bands to be recognized and quantified, so that any fluorescence contribution during nominal reflectance imaging can be accounted for and adjusted in subsequent rendering. This paper describes the apparatus, capture procedures, and processing techniques that are employed. The impact of fluorescence on color fidelity during reproduction is quantified with deltaE calculations and discussed for highly fluorescent pastels.
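For illustration of the color-difference measure, a minimal deltaE computation is sketched below; the CIE76 formula is assumed here since the abstract does not state which deltaE variant is used.

```python
# Sketch only: CIE76 deltaE between CIELAB colors (assumed variant).
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean distance in CIELAB; lab1, lab2 are (..., 3) arrays."""
    return np.sqrt(np.sum((np.asarray(lab1, float) - np.asarray(lab2, float)) ** 2, axis=-1))

# A shift of 5 units in b* alone, e.g. from uncorrected fluorescence, gives deltaE = 5.0
print(delta_e_cie76([60.0, 10.0, 20.0], [60.0, 10.0, 25.0]))
```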
A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include a streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the Library of Congress collection. This paper describes practical issues considered in the design of EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the 1516 Carta Marina map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.
We have investigated the use of short pulses of infrared (λ = 2.09 μm) light from a Ho:YAG laser to photofragment occlusions and restore flow in ventricular shunts, which provide the sole means of maintaining proper intracranial pressure in hydrocephalus patients. These experiments employed model tissues, a polymeric model compound, and patient explants in order to determine appropriate pulse energies and delivery rates for removal of occlusion material. Laser energy doses and rates of occlusion removal were established for these materials. Laser energy doses that do not damage the shunt device or surrounding tissue were identified. Optical fibers (25 ga. or smaller) can be introduced through the dome of current shunt devices and threaded to the occlusion site. Clinical application will require the continued development of an introducer tool for the transcutaneous insertion of the optical fiber into the shunt device and irrigation techniques for removing the occlusion detritus generated by photofragmentation treatment. Using this approach, a minimally invasive and benign procedure for in situ restoration of flow in occluded neurological implant devices becomes possible.
Laser surgery of soft tissue can exploit the power of brief, intense pulses of light to cause localized disruption of tissue with minimal effect upon surrounding tissue. In particular, studies of Ho:YAG laser surgery have shown that the effects of cavitation upon tissues and bone depend upon the physical composition of structures in the vicinity of the surgical site. For photofragmentation of occluding structures within catheters and other implant devices, it is possible to exploit the particular geometry of the catheter to amplify the effects of photofragmentation beyond those seen in bulk tissue. A Ho:YAG laser was used to photofragment occlusive material (tissue and tissue analogs) contained in glass capillary tubing and catheter tubing of the kind used in ventricular shunt implants for the management of hydrocephalus. Occluded catheters obtained from patient explants were also employed. Selection of operational parameters used in photoablation and photofragmentation of soft tissue must consider the physical composition and geometry of the treatment site. In the present case, containment of the soft tissue within relatively inelastic catheters dramatically alters the extent of photofragmentation relative to bulk (unconstrained) material. Our results indicate that the disruptive effect of cavitation bubbles is increased in catheters, due to the rapid displacement of material by cavitation bubbles comparable in size to the inner diameter of the catheter. The cylindrical geometry of the catheter lumen may additionally influence the propagation of acoustic shock waves that result from the collapse of the condensing cavitation bubbles.
The widespread and increasing use of mammographic screening for early breast cancer detection is placing a significant strain on clinical radiologists. Large numbers of radiographic films have to be visually interpreted in fine detail to determine the subtle hallmarks of cancer that may be present. We developed an algorithm for detecting microcalcification clusters, the most common and useful signs of early, potentially curable breast cancer. We describe this algorithm, which utilizes contour map representations of digitized mammographic films, and discuss its benefits in overcoming difficulties often encountered in algorithmic approaches to radiographic image processing. We present experimental analyses of mammographic films employing this contour-based algorithm and discuss practical issues relevant to its use in an automated film interpretation instrument.
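A minimal sketch of the contour-based idea follows; the intensity level, size threshold, cluster radius, and library choice are assumptions for illustration, not the authors' instrument code.

```python
# Sketch only: iso-intensity contours as microcalcification candidates, then
# a crude cluster count by neighborhood radius.
import numpy as np
from skimage import measure

def microcalc_candidates(image, level, max_perimeter=40.0):
    """Centroids of small, closed contours at a bright intensity level."""
    centroids = []
    for contour in measure.find_contours(image, level):
        closed = np.allclose(contour[0], contour[-1])
        perimeter = np.sum(np.linalg.norm(np.diff(contour, axis=0), axis=1))
        if closed and perimeter < max_perimeter:
            centroids.append(contour.mean(axis=0))   # (row, col)
    return np.array(centroids)

def neighbor_counts(centroids, radius=50.0):
    """Candidates with several neighbors within `radius` pixels suggest a cluster."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    return (d < radius).sum(axis=1)
```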
In order to search for symbolically encoded sequences of DNA base information, we have constructed an incoherent optical feature extraction system. This approach uses video display, spatial light modulation, and detection components in conjunction with microlenslet replicating optics to expedite the recognition of symbol sequences based on their symmetry properties. Multichannel operation is achieved through the replication of input scenery, making possible a higher throughput rate than for single-channel systems. A notable feature of our arrangement has been the exchanged positions of input scenery and the filter set. The conventional treatment has been to display the input scene on a monitor for projection onto a set of feature extraction vectors realized as amplitude-modulated LCTV devices or lithographically prepared masks. We have chosen instead to provide the filter set as input to the system and to correspondingly place the sequence data in the filter plane of the system, relying on the commutativity of projection to allow this role reversal. A class of DNA sequences known as palindromes has special regulatory functions in biological systems; this class is distinguished by the antisymmetric arrangement of its bases. We have designed our optical feature extractor to classify short (6 bases in length) sequences of DNA as palindrome or nonpalindrome. We note that this classification is made on the basis of sequence symmetry, independent of base composition. We discuss the design of this architecture and the considerations that led us to the sequence representation. Initial results of this work are presented. Finally, the integration of this optical architecture into a complete system is discussed.
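The symmetry test itself is easy to state in conventional software terms; the sketch below uses the standard biological definition (a sequence that equals its own reverse complement), independent of the optical hardware described above.

```python
# Sketch only: palindrome test for short DNA sequences.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def is_palindrome(seq):
    """True when the sequence equals its reverse complement."""
    seq = seq.upper()
    return seq == seq.translate(COMPLEMENT)[::-1]

print(is_palindrome("GAATTC"))   # True  (the EcoRI recognition site)
print(is_palindrome("GATTAC"))   # False
```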
DNA, the molecule containing the genetic code of an organism, is a linear chain of subunits. It is
the sequence of subunits, of which there are four kinds, that constitutes the unique blueprint of an
individual. This sequence is the focus of a large number of analyses performed by an army of
geneticists, biologists, and computer scientists. Most of these analyses entail searches for specific
subsequences within the larger set of sequence data. Thus, most analyses are essentially pattern
recognition or correlation tasks. Yet, there are special features to such analysis that influence the
strategy and methods of an optical pattern recognition approach. While the serial processing employed
in digital electronic computers remains the main engine of sequence analyses, there is no fundamental
reason that more efficient parallel methods cannot be used.
We describe an approach using optical pattern recognition (OPR) techniques based on matched
spatial filtering. This allows parallel comparison of large blocks of sequence data. In this study we
have simulated a Vander Lugt architecture [1] implementing our approach. Searches for specific target
sequence strings within a block of DNA sequence from the ColE1 plasmid [2] are performed.
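A digital analog of the simulated matched filtering, with an assumed one-hot base encoding rather than the optical encoding used in the study, is sketched below: the target is correlated against the sequence block with FFTs, and exact matches appear as correlation peaks equal to the target length.

```python
# Sketch only: FFT-based correlation search for a target string in a DNA sequence.
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """(len(seq), 4) indicator matrix, one column per base."""
    return np.array([[b == base for base in BASES] for b in seq], dtype=float)

def match_positions(sequence, target):
    s, t = one_hot(sequence), one_hot(target)
    n = len(sequence) + len(target) - 1              # zero-pad to avoid wrap-around
    corr = np.zeros(n)
    for c in range(4):                               # sum per-base correlations
        corr += np.real(np.fft.ifft(np.fft.fft(s[:, c], n) * np.conj(np.fft.fft(t[:, c], n))))
    valid = corr[: len(sequence) - len(target) + 1]  # peak == len(target) at exact matches
    return np.flatnonzero(np.isclose(valid, len(target)))

print(match_positions("ACGTGAATTCACGT", "GAATTC"))   # [4]
```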