This PDF file contains the front matter associated with SPIE Proceedings Volume 12624, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
For augmented reality to spread and reach common usage levels, the visual experience needs to be entirely healthy and natural for the user. To this end, CREAL has developed a unique AR display that combines light-field imagery with ordinary, highly transparent ophthalmic lenses, providing true-to-life depth perception for the human eye, a customizable prescription, and the classic aesthetic look of conventional lenses. While almost all AR glasses providers are lumbered with trade-offs between focal depth, image resolution, and lens aesthetics, CREAL’s newest display finally enables an AR visual experience that “has it all”. This talk will introduce the solution.
The optics of the human eye is simple, robust, and well adapted to the requirements of the visual system. How different induced optical profiles affect vision is a critical issue in ophthalmology. A very insightful way to evaluate this is through adaptive optics visual simulators. These instruments, available in laboratories and clinics in desktop format, measure the optics of the eye with a wavefront sensor and add arbitrary phase profiles with spatial light modulators while the subject performs visual tasks. In recent years, we have been advancing this concept toward integration in wearable devices, allowing vision to be tested under more natural conditions. In this presentation, I will review the history of adaptive optics in the eye, with special emphasis on current efforts to develop wearable devices.
Two technologies were combined to demonstrate a compact, foveated, occlusive Mixed Reality (MR) headset. Waveguide displays created the central, high-resolution Field of View (FOV), and a Heterogeneous Multi-Lens Array (HMLA)-based display formed the periphery. A HoloLens 2, employing transparent waveguide displays, served as the foveated display, covering a horizontal FOV of 43° with a resolution of 47 Pixels Per Degree (ppd). Each peripheral display used a custom-made HMLA and an off-the-shelf OLED microdisplay, with each lens of the HMLA acting as a small VR display. Collectively, the array lenses tiled both the eye box and the FOV to create a non-rectangular FOV of 26° × 26° and a large eye box with a resolution of 5 ppd. Since the waveguide headset has see-through optics, the two peripheral displays were attached in front of its visor so that the two images merged. The peripheral display had a one-degree overlap with the foveated display, making the total FOV of the hybrid headset 93°. The system’s optics were less than 5 mm thick, though the experimental setup was thicker due to optomechanical and industrial design constraints. The software of the peripheral display was integrated with the central display, making for a cohesive experience. The latency between the (faster) central and the (slower) peripheral displays was compensated by using predictive algorithms for head movement. A qualitative user study at the end of the project verified that the experience was improved, showing that neck strain was significantly reduced and comfort increased.
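The latency compensation mentioned above can be sketched with a minimal predictor. The abstract does not specify the algorithm, so constant-velocity extrapolation of the head pose is used here purely as an illustration; the function and numbers are hypothetical.

```python
# Hypothetical sketch: constant-velocity head-pose prediction to compensate
# the latency gap between a fast central display and a slower peripheral one.
# Linear extrapolation is one common minimal choice, not the authors' method.

def predict_yaw(samples, latency_gap_s):
    """Extrapolate head yaw (degrees) forward by the latency gap.

    samples: list of (timestamp_s, yaw_deg) pairs, oldest first.
    """
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    velocity = (y1 - y0) / (t1 - t0)          # deg/s from the last two poses
    return y1 + velocity * latency_gap_s      # render pose for the slow display

# Head turning at 90 deg/s, peripheral display lagging 20 ms behind:
poses = [(0.000, 10.0), (0.010, 10.9)]
print(predict_yaw(poses, 0.020))              # approximately 12.7 degrees
```

In practice a filter (e.g. a Kalman predictor) would replace the two-sample slope, but the idea is the same: render the slow display at the pose the head will have when its frame actually appears.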
A novel display solution for Metaverse AR glasses: a novel light-engine solution for LCOS and micro-LED displays, combined with a nano-imprinted waveguide, achieves lightweight, slim, low-cost, high-volume manufacturing.
In this work, we present a proof-of-concept Holographic Near-Eye Display (HNED) that can, in principle, be very thin (a few mm), offer high resolution, an enlarged eye box, and a wide Field of View (FOV), correct for the user’s prescription, and display 3D content, thus avoiding the Vergence-Accommodation Conflict (VAC). This optical architecture combines a Holographic Optical Element (HOE) waveguide, a Liquid Crystal on Silicon (LCoS) panel acting as a digital dynamic hologram, eye tracking, and display exit-pupil steering and switching. The waveguide expands the beam and acts as a front illuminator for the LCoS, so a small in-coupling element on the waveguide can illuminate the large area of the digital hologram. Several static multiplexed holograms were recorded within the in-coupling and out-coupling HOEs of the waveguide. The static in-coupling holograms did not contain any optical power, while the static out-coupling holograms had a lens function with a focal distance of around 35 mm. In-coupling and out-coupling static holograms were matched in pairs and met the Bragg condition at the same horizontal angle. The digital hologram, i.e., the LCoS, was illuminated with the converging illumination emerging from the static out-coupling hologram. The light was focused at a different position in the eye box for each static multiplexed hologram. Thus, knowing the position of the user’s eye, the beam angle incident on the in-coupling element was adjusted, and the corresponding out-coupling element directed the light into the user’s eye pupil. Steering in the vertical direction was achieved by utilizing the “Bragg Degeneracy”1 of the static holograms.
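The exit-pupil switching described above amounts to selecting, from the recorded hologram pairs, the one whose focus lies nearest the tracked pupil. A minimal sketch with purely illustrative exit-pupil positions (`EXIT_PUPILS_MM` is not from the paper):

```python
# Hypothetical sketch of the exit-pupil switching logic: each multiplexed
# static hologram pair focuses light at a fixed position in the eye box, and
# the eye tracker's pupil estimate selects the nearest one.

EXIT_PUPILS_MM = [-6.0, -3.0, 0.0, 3.0, 6.0]   # illustrative focus positions

def select_hologram(pupil_x_mm):
    """Return the index of the multiplexed hologram whose exit pupil is
    closest to the tracked horizontal pupil position."""
    return min(range(len(EXIT_PUPILS_MM)),
               key=lambda i: abs(EXIT_PUPILS_MM[i] - pupil_x_mm))

print(select_hologram(2.1))   # index 3 (exit pupil at +3.0 mm)
```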
The limited space-bandwidth product of digital holography results in a trade-off between the field of view (FOV) and the eye motion box (EMB) size. One potential approach to overcoming this trade-off is to use a waveguide as a pupil expander. However, this approach is constrained to generating only infinite-depth or 2D images, which can cause visual discomfort. To address this issue, a method that enhances the space-bandwidth product while providing a 3D image with full depth range is necessary. In this paper, we introduce a projection-type holographic display that combines a waveguide, a spatial light modulator, and a laser light source to display true 3D holographic images with an extended FOV. Experimental results demonstrate that this method effectively generates holographic 3D images with an FOV expanded fourfold in the horizontal direction compared to conventional methods.
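The FOV/EMB trade-off in the opening sentence follows directly from the modulator's space-bandwidth product. A small numeric sketch, assuming illustrative SLM parameters (wavelength, pixel pitch, and pixel count below are not from the paper):

```python
import math

# Illustrative numbers only: a 4K-class SLM at 532 nm.
wavelength = 532e-9        # m
pitch = 3.74e-6            # SLM pixel pitch, m
n_pixels = 3840            # horizontal pixel count

# The maximum diffraction half-angle of the SLM sets the achievable FOV.
half_angle = math.asin(wavelength / (2 * pitch))
fov_deg = math.degrees(2 * half_angle)

# Space-bandwidth invariant: (FOV in rad) x (eye box) ~ N * lambda,
# so widening the FOV shrinks the eye box and vice versa.
eyebox_mm = n_pixels * wavelength / (2 * half_angle) * 1e3

print(f"FOV = {fov_deg:.1f} deg, eye box = {eyebox_mm:.2f} mm")
```

With these numbers the full FOV is only about 8° while the eye box equals the SLM aperture (about 14 mm); trading one for the other is exactly the constraint the waveguide approach seeks to relax.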
Wafer Scale Fabrication Techniques and Technologies
Diffractive optical elements (DOEs) are gradually replacing traditional refractive optics in many applications. The growing interest in DOEs stems mainly from their flexibility in light manipulation with a small form factor and their ability to combine optical and computational functions in a single part through a software-hardware co-design approach. Two main methods are widely used to fabricate DOEs. The first is an etch-based method that combines photolithography and reactive-ion etching (RIE). The second is additive fabrication, which combines metal deposition and nanoimprint lithography (NIL). Both methods have drawbacks. RIE suffers from issues such as lag in the etched depth when feature sizes differ within the same pattern (RIE lag), high surface roughness, and an aspect-ratio-dependent etch rate. The additive method can produce high-resolution micro-optics but may suffer from poor adhesion of the patterns to the substrate and poor uniformity across large areas. Here we propose a new way to fabricate multi-level DOEs by directly growing an optically transparent material on a glass substrate. The method combines the deposition of silicon dioxide (SiO2) by plasma-enhanced chemical vapor deposition (PECVD) with a bi-layer lift-off. We provide evidence of the effectiveness of the fabrication method by comparing a 16-level Fresnel lens fabricated by the RIE method with another lens fabricated by the proposed method. The characterization results show that with the proposed method the surface roughness is lower and the depth is uniform. Furthermore, the optical test shows a reduced haze effect.
Selective laser etching (SLE) enables highly precise three-dimensional structuring of glasses with a resolution as low as a few μm. Two main process steps are necessary for this technique. First, the previously created design is written inside the glass using fs-laser radiation. Subsequently, the glass is placed in acid or lye to etch the laser-modified area; the substance required for this post-processing step depends on the glass used. In our work, we investigated the structuring of fused silica with subsequent etching in KOH in detail. We studied the influence of different writing parameters, such as laser power, repetition rate, polarization, stage movement speed, and hatching distance, on optimizing the surface roughness, which is crucial for optical applications. The technology is not limited to the structuring of flat glass substrates but is applicable to fibers, waveguides, and more complex 3D structures as well. Hollow-core fibers have also been processed to create inlets and outlets for fluids, and standard glass fibers have been etched to provide free access to the fiber core. The latter process in particular enables a wide field of further applications, e.g., if metal-organic frameworks are applied for sensing purposes or further optical structures are printed on the surface using two-photon polymerization.
We aim to integrate two-photon polymerization (TPP) into a fully digitalized and automated workflow for micro- and nano-optics production. This requires linking the coordinate system of the writing laser scanner to the geometry of the sample or previously fabricated structures on it. We developed a fast layer-detection algorithm that determines the axial position and the tilt angles of both interface planes of the photosensitive material using the integrated microscope camera of the TPP system. Furthermore, we identify the shear angles between the camera and the lateral scanner axes. The individual measurement results are stored and accessible via a direct-written identification QR code on the sample itself. This is an important step towards an individualized automatic workflow based on digital twins.
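The tilt-angle part of the layer detection can be sketched as a plane fit over focus samples; the point data, fitting choice, and function below are illustrative, not the authors' algorithm.

```python
import numpy as np

# Hypothetical sketch of the layer-detection geometry: fit a plane to focus
# positions measured at several lateral points, then report the tilt angles
# of the material interface relative to the scanner axes.

def interface_tilt(points):
    """points: (N, 3) array of (x, y, z) focus samples on one interface.
    Returns (tilt_x_deg, tilt_y_deg) from a least-squares plane z = ax+by+c."""
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return np.degrees(np.arctan(a)), np.degrees(np.arctan(b))

# Synthetic data: a plane tilted 1 degree about y (z grows with x):
samples = [(x, y, np.tan(np.radians(1.0)) * x)
           for x in (-50, 0, 50) for y in (-50, 0, 50)]
print(interface_tilt(samples))   # approximately (1.0, 0.0)
```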
Thanks to clear advantages in picture quality, power efficiency, and compactness, OLED microdisplays have captured the market for near-eye displays, from consumer and professional cameras to industrial and military applications such as night vision. OLED brightness and durability meet or exceed market requirements, and growth in OLED microdisplays is very strong thanks to these and other advantages and benefits.
But what about OLED application in augmented reality ("AR")? Until recently, LEDs were often selected for AR products due to high brightness. However, OLED technology is evolving rapidly, and the primary challenge with AR is not necessarily brightness.
In this presentation, we will highlight the evolution of OLED technology, and how OLED can directly contribute to breakthroughs in the design and development of AR devices at the system level.
As flat optics become increasingly mainstream, there is high interest in improving patterning resolution, making new materials available, and lowering manufacturing costs for these components. Because of their maturity across the entire supply chain, 300 mm platforms offer one of the best opportunities to achieve all of these goals simultaneously. By leveraging existing tooling and knowledge from 300 mm CMOS patterning, the high pattern resolution of immersion DUV mastering can be combined with nanoimprint lithography and etching to achieve pattern transfer to optical materials. Wafer-scale CMOS metrology can also be leveraged to optimize process uniformity and repeatability. This talk will present imec’s recent developments in utilizing CMOS fab tools to pattern high-index dielectrics on 300 mm substrates.
The optical scanning approach to image processing has advantages over frequency-plane architectures in terms of accuracy and flexibility. We review an optical scanning approach to image processing that includes the capability of holographic recording.
In the Big Data era, holographic data storage has become a strong candidate recording technology because it offers not only large storage capacity but also high transfer rates. However, a large gap remains between its realized capacity and the theoretical limit. Polarization holography, a newly researched field with extraordinary capabilities for modulating the amplitude, phase, and polarization of light, has enabled several new applications, such as holographic storage, multichannel polarization multiplexing, vector beams, and optical functional devices. In this paper, we introduce fundamental research on polarization holography with linearly polarized light, a component of the theory of polarization holography. The polarization modulation realized using these polarization characteristics exhibits unusual functionalities, making polarization holography an attractive research topic, and a novel method for increasing the capacity of holographic data storage is provided.
Holographic Optical Elements (HOEs) have emerged as a pivotal technology for enhancing holographic Augmented Reality (AR) display systems. This paper presents an innovative approach that utilizes an off-axis arrangement with an HOE to secure a wide field of view of up to 55°, making significant strides in volumetric reduction compared to conventional 4f filtering systems. However, a challenge arises from Bragg mismatch in the HOE, which creates aberrations. Our work proposes a method for compensating these aberrations on a voxel-by-voxel basis, substantially improving the quality of the holographic display. Limitations such as the 2 mm maximum eye box size due to the diffraction limit of the spatial light modulator (SLM) are acknowledged, but we suggest potential solutions such as using the HOE substrate glass as a waveguide and incorporating an array of lenses with an eye tracker for pupil tracking. Our findings offer significant contributions to the holographic display technology landscape and suggest promising avenues for future research.
Cardiomyocytes form an electrically coupled syncytium, which is the basis for the spatiotemporally synchronized propagation of macroscopic action potential wavefronts. Dysfunctional signal propagation patterns are a main cause of deadly tachycardia and are not yet fully understood. Optogenetics is a versatile toolset for the functional investigation of excitable cells and well suited to the investigation of excitation wavefronts. We present a two-wavelength system using computer-generated holograms for the simultaneous stimulation and inhibition of induced stem-cell-derived human cardiomyocytes genetically sensitized to light, providing non-destructive models of myocardial scarring in vitro. The system is based on two beam paths, each comprising a binary ferroelectric SLM with frame rates reaching 1.7 kHz in a Fourier hologram configuration. To achieve near diffraction-limited spatial resolution, system-inherent aberrations are corrected digitally by superposing the light-pattern-generating holograms with sets of Zernike polynomials determined by an iterative optimization procedure. Thus, multiple foci or complex illumination patterns can be positioned three-dimensionally to illuminate multiple locations simultaneously and create defined excitation wavefronts. We show investigations of myocardial excitation control using different opsins such as ChR2, ChRimson, and BiPOLES. This paves the way for future optogenetic heart rhythm control and the modeling of arrhythmia induced by myocardial fibrosis using cardiac organoids in vitro.
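The digital aberration correction described above (superposing Zernike terms on the hologram phase) can be sketched as follows; the grid size, polynomial set, and coefficients are illustrative, and in the real system the coefficients come from the iterative optimization.

```python
import numpy as np

# Minimal sketch (not the authors' code): digital aberration correction by
# adding low-order Zernike phase terms to a hologram phase before display.

N = 256
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

# A few low-order Zernike polynomials on the unit disk
zernike = {
    "defocus": 2 * rho**2 - 1,
    "astig_0": rho**2 * np.cos(2 * theta),
    "coma_x":  (3 * rho**3 - 2 * rho) * np.cos(theta),
}

def corrected_phase(hologram_phase, coeffs):
    """Superpose Zernike correction terms (coeffs in radians) on the
    light-pattern-generating hologram phase, wrapped to [-pi, pi)."""
    correction = sum(c * zernike[name] for name, c in coeffs.items())
    total = hologram_phase + correction * pupil
    return (total + np.pi) % (2 * np.pi) - np.pi

phase = np.zeros((N, N))                     # placeholder hologram phase
out = corrected_phase(phase, {"defocus": 0.8, "coma_x": -0.3})
print(out.shape)
```

For a binary ferroelectric SLM the corrected phase would additionally be binarized before upload; that step is omitted here.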
A spatial light modulator is an essential electro-optical device in applications that require modulating the amplitude, phase, or polarization of light. Its proper use requires determining its Mueller matrix. In this work, we compare different methods for calculating the condition number of Mueller-Stokes polarimetry techniques, which defines their maximum relative error. Combining experimental and calculated values, we determine the most suitable method for calculating the condition number.
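The role of the condition number can be illustrated with the classic rotating-linear-polarizer example: the condition number of the polarimeter's measurement matrix bounds how much measurement noise is amplified in the recovered Stokes components. The analyzer states below are a textbook choice, not necessarily those compared in the paper.

```python
import numpy as np

# Sketch: rows of the measurement matrix W are the analyzer Stokes vectors
# of each measurement state; cond(W) bounds the relative-error amplification.

def analyzer_row(theta):
    """Ideal linear polarizer at angle theta (first row of its Mueller
    matrix): 0.5 * [1, cos 2theta, sin 2theta, 0]."""
    return 0.5 * np.array([1.0, np.cos(2*theta), np.sin(2*theta), 0.0])

# Four linear analyzer states cannot sense circular polarization (S3),
# so restrict to the linear Stokes components S0..S2 for this illustration.
angles = np.deg2rad([0, 45, 90, 135])
W = np.array([analyzer_row(t)[:3] for t in angles])

# 2-norm condition number: ratio of largest to smallest singular value.
print("cond(W) =", np.linalg.cond(W))   # sqrt(2) for this configuration
```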
This paper presents a structured-light 3D scanning technique based on a Laser Beam Scanner (LBS) using a bi-resonant vacuum-packaged MEMS mirror and a low-cost imaging sensor. A single infrared continuous-wave laser source, together with the MEMS scanner, generates dynamic Lissajous patterns, delivering a continuous increase in scan precision over time. A collimated laser beam is directly coupled to the MEMS mirror without the need for diffractive optics. The Lissajous patterns are captured by a low-cost CMOS imaging sensor with a short integration time, while a bandpass filter ensures that only infrared signals are detected by the camera. Due to the non-repeatability of the generated Lissajous patterns, all pixels of the sensed environment are illuminated within a few consecutive images. The spatial depth resolution is defined by the sensor resolution, enabling very high-density 3D scans thanks to the high imaging resolution achievable with CMOS sensors. Advanced image processing algorithms are implemented to accurately calculate pixel-wise depth on the captured images. The hardware complexity of the proposed system is significantly reduced compared to other 3D sensing solutions, which leads to a lower bill of materials. All components are mass-producible and easy to assemble, making the system advantageous for high-volume markets and suitable for integration into consumer products. The proposed 3D scanning system can be integrated into AR and MR headsets for use cases such as hand gesture scanning and indoor SLAM. Due to its high-resolution capability, the presented 3D scanning technique can also be used for facial scanning and avatar generation.
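The growing coverage of a non-repeating Lissajous pattern can be sketched numerically; the mirror frequencies, grid, and sampling rate below are purely illustrative, not the product's specifications.

```python
import math

# Sketch: a bi-resonant mirror traces x = sin(2*pi*fx*t), y = sin(2*pi*fy*t).
# With a frequency ratio whose repeat period is long, the pattern does not
# retrace itself, so pixel coverage of the field keeps growing over time.

def coverage(f_x, f_y, duration_s, grid=64, steps_per_s=2_000_000):
    """Fraction of a grid x grid field hit by the Lissajous trajectory."""
    hit = set()
    for i in range(int(duration_s * steps_per_s)):
        t = i / steps_per_s
        px = int((math.sin(2*math.pi*f_x*t) * 0.5 + 0.5) * (grid - 1))
        py = int((math.sin(2*math.pi*f_y*t) * 0.5 + 0.5) * (grid - 1))
        hit.add((px, py))
    return len(hit) / grid**2

# Coverage increases as more non-repeating frames are accumulated:
print(coverage(21_700, 21_001, 0.005), coverage(21_700, 21_001, 0.020))
```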
The computed tomography imaging spectrometer (CTIS) is a snapshot-capable hyperspectral camera. A diffractive optical element is used to create multiple projections of the hyperspectral data cube side by side on the image sensor. A reconstruction algorithm computes the hyperspectral image from the spatio-spectrally multiplexed signal, solving a problem similar to that addressed by the reconstruction algorithms of computed tomography scanners. We present how such a system can be realized with a parallelized approach: several apertures are placed next to each other, and each aperture creates only one projection using a grating prism.
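The reconstruction step can be sketched as a toy multiplicative (EM-style) tomographic inversion of y = Ax, where x is the flattened hyperspectral cube and each projection contributes rows of A; the matrix sizes and data below are synthetic and purely illustrative.

```python
import numpy as np

# Toy sketch of an EM/Richardson-Lucy-style iteration, as commonly used in
# emission tomography; not necessarily the algorithm used by the authors.

rng = np.random.default_rng(0)
n_vox, n_meas = 30, 90
A = rng.random((n_meas, n_vox))          # stand-in system matrix
x_true = rng.random(n_vox)
y = A @ x_true                           # simulated (noiseless) sensor signal

x = np.ones(n_vox)                       # nonnegative initial guess
for _ in range(500):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / A.sum(axis=0)   # multiplicative EM update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The multiplicative update keeps the estimate nonnegative, which matches the physical constraint that spectral radiance cannot be negative.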
Manufacturing processes for airbag products involve different and complex phases. An important one is the quality check using optical metrology methods, both during the process and in its final phase. The aim of the present work is to study the advantages and limitations of using a machine vision system for these purposes. We report the development of a machine vision system for inter-phase and final quality checks related to shape, dimensions, patterns, hole diameters, and orientation, as well as the correct processing sequence of airbag components. We utilized Basler/Cognex cameras with machine learning (ML)/artificial intelligence (AI) algorithms for inspection and CO2 lasers for cutting the components. We also developed (and subsequently optimized) in-house processes such as ultrasonic welding and dynamic positioning of components using camera-guided robots. To validate the concept of the studies, several methods and tools were utilized: a virtual simulation programming tool (RobotStudio), a 3D mechanical CAD program (SolidWorks), Design of Experiments (DOE), Statistical Process Control (SPC), DMAIC problem solving (Define, Measure, Analyze, Improve, Control), and Process Failure Mode & Effects Analysis (PFMEA). The initial rejection rates (i.e., before applying these methods) were >30%, including false rejections. Overall, we reached an accuracy of 0.5 mm for dimensional measurements of textile parts with different shapes in a field of view of 300 to 400 mm; a rejection rate and false-reject rate <1.3%; and appropriate stability and repeatability of the manufacturing process.
Beam and image steering by Micro-Electro-Mechanical System (MEMS) spatial light modulators decouples the trade-offs between resolution, field of view, and size of displays and optics that are a common challenge in optical design. We give an overview of a solid-state lidar and an augmented reality display engine employing MEMS SLMs: the Texas Instruments Digital Micromirror Device and Phase Light Modulator.
The need for free-form micro-optics (FFMO) is constantly growing in well-established business segments, including flat-panel displays, solid-state illumination, thin-film solutions for security/anti-counterfeiting applications, AR/VR wearables, and automotive headlights. However, high barriers to accessing pre-commercial production capabilities prevent companies, especially SMEs, from exploiting FFMO technology in commercial products and hinder further innovation. To lower the barrier to accessing FFMO technology, CSEM and its partners have established the PHABULOuS Pilot Line. PHABULOuS offers a unique one-stop shop for all prototyping and manufacturing of free-form micro-optics, from pilot to full-scale production. To mature the FFMO technology, the Pilot Line members have developed high-precision origination techniques complemented by industry-fit, high-throughput up-scaling technologies for the cost-effective production of large-area FFMO. At the core of these technologies is Step & Repeat UV imprinting. The method has been successfully demonstrated in the PHABULOuS project for high-precision upscaling of rigid small masters to flexible tools with 600 x 300 mm2 dimensions using a standard UV-NIL stepper modified for this purpose. Since there is currently no commercial Step & Repeat machine on the market able to replicate free-form microstructures over large areas with the required precision, CSEM has developed a high-precision Step & Repeat UV-replication platform designed specifically for this purpose. Combined with expertise in design and optical simulation, origination, and electroforming, the newly developed Step & Repeat capabilities at CSEM will strengthen the PHABULOuS Pilot Line offerings.
We propose a scalable and energy-saving technology for monolithic polymer components such as aspherical lenses. UV replication is well known from wafer-level optics, where the supporting glass wafers remain in the final lens, severely limiting the degrees of freedom of the optical design. In addition, material shrinkage when curing the polymer limits the achievable sag heights of the lenses, so that only low-resolution imaging optics are possible. In our approach, the glass substrate in the individual lenses is omitted, and the shrinkage is compensated with minimal form error. This enables sag heights and aspherical lens profiles on both sides of thin menisci, as required in high-resolution imaging optics so far realized only by injection molding. In contrast to injection molding, the replication is carried out at room temperature, which saves energy and leads to a more eco-friendly production. In addition, low-cost materials for molds and lens masters can be used, so that demonstrators, prototypes, small series, and medium-volume products of complex imaging systems can now be addressed economically for the first time. Such smaller manufacturing volumes were so far limited to spherical glass lenses, with their known drawbacks in complexity and miniaturization compared to aspheres. Furthermore, the materials used are compliant with high-temperature requirements, even surviving reflow soldering at 260°C. We present details of our new technology using realized demo systems as examples, working towards high-volume, low-cost, and high-performance lens stacks ultimately targeting mobile imaging applications.
Digital optics, especially in the form of flat wafer-level optics (diffractive optics, computer-generated holograms, holographic optical elements, MEMS and MOEMS, metasurfaces, ...), are key building blocks for achieving small-form-factor display, imaging, and sensing systems. However, flat optics do not necessarily lead to flat optical systems. Moreover, alignment of optical elements within a system can often be more costly than the optics themselves, leading to lower system yields or to systems prone to misalignment in the field. We review here various optical architectures using wafer-scale digital optics, along with the specific fabrication technologies, that eventually lead to flat optical systems with high yields and very tight alignment.
High-power VCSEL systems are a versatile and powerful tool for thermal treatment in industrial production, where they enable a very homogeneous and locally controllable irradiance distribution at small working distances. Due to the inherent divergence of VCSELs, both characteristics degrade with increasing working distance. Depending on the size of the VCSEL system, the irradiation is no longer homogeneous at distances of about 100 mm, and the local controllability is strongly limited at even smaller distances. To extend the application range of VCSEL systems to larger working distances while maintaining homogeneity and local controllability, two multi-aperture beam integrators have been designed. Simulation results as well as measurements of a prototype system are presented in this work.
Optical systems in lithography machines play a significant role in their performance and therefore need to be optimized for specific applications. Artificial Intelligence (AI), and in particular metaheuristics, are utilized in optimization algorithms for finding a diverse set of feasible and high-performing designs. The diversity requirement on the produced solutions is enforced to allow taking into account additional constraints that are difficult to formalize. In this work, we analyse the space of solutions previously produced by a niching evolutionary algorithm for the Cooke triplet optical system and propose an approximation of the manifold on which all high-performing designs lie. First, we show the existence of high-performing optical designs that are structurally different from the 21 previously known theoretical solutions. To do this, we develop a general, computationally efficient methodology to partition known high-quality points, and their (accidentally found) improvements, into their corresponding attraction basins in the case where neither the exact number of landscape attractors nor their locations are known. We then construct a manifold estimate containing high-performing solutions by fitting a Gaussian-process-based classifier that predicts whether an arbitrary design is close to high-performing. The proposed approach shows that AI-assisted optimization is beneficial, and it can be used to extend the capabilities of lithographic scanners and metrology equipment. Furthermore, it opens the possibility of studying other industrial applications.
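The basin-partitioning step can be illustrated on a toy landscape. The sketch below is an assumption-laden simplification, not the authors' actual method: it descends from each known point and merges descent endpoints that coincide, so neither the number nor the locations of the attractors need to be known in advance.

```python
import numpy as np

def partition_basins(points, grad, lr=0.01, steps=2000, tol=1e-2):
    """Partition starting designs into attraction basins: run gradient descent
    from each point, then merge descent endpoints that coincide within tol.
    Neither the number nor the locations of the attractors are needed upfront."""
    ends = []
    for p in points:
        x = np.asarray(p, dtype=float)
        for _ in range(steps):
            x = x - lr * grad(x)
        ends.append(x)
    attractors, labels = [], []
    for e in ends:
        for i, a in enumerate(attractors):
            if np.linalg.norm(e - a) < tol:
                labels.append(i)
                break
        else:
            attractors.append(e)
            labels.append(len(attractors) - 1)
    return labels, attractors

# Toy double-well landscape f(x, y) = (x^2 - 1)^2 + y^2 with minima at (+-1, 0)
grad = lambda p: np.array([4 * p[0] * (p[0] ** 2 - 1), 2 * p[1]])
labels, attractors = partition_basins([(-0.5, 0.3), (0.8, -0.2), (-1.4, 0.1)], grad)
```

Here the first and third starting points fall into the same basin, and two distinct attractors are discovered without being specified in advance.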
We introduce an innovative concept for 3D imaging that utilizes a structured-light principle. While our design is specifically tailored for collaborative scenarios involving mobile transport robots, it is also applicable to similar contexts. Our system pairs a standard camera with a projector that employs a diffractive optical element (DOE) and a collimated laser beam to generate a coded light pattern. This allows three-dimensional measurement of objects from a single camera shot. The main objective of the 3D sensor is to facilitate the development of automatic, dynamic, and adaptive logistics processes capable of managing diverse and unpredictable events. The key novelty of our proposed system for triangulation-based 3D reconstruction is the unique coding of the light pattern, ensuring robust and efficient 3D data generation even within challenging environments such as industrial settings. Our pattern relies on a perfect submap, a matrix featuring pseudorandomly distributed dots in which each submatrix of a fixed size is distinct from the others. Based on the size of the working space and the known geometrical parameters of the optical components, we establish vital design constraints such as minimum pattern size, uniqueness window size, and minimum Hamming distance for the design of an optimal pattern. We empirically examine the impact of these pattern constraints on the quality of the 3D data and compare our proposed encoding with single-shot patterns from the existing literature. Additionally, we provide detailed explanations of how we addressed several challenges during the fabrication of the DOE, which are crucial in determining the usability of the application. These challenges include reducing the 0th diffraction order, accommodating a large horizontal field of view, achieving high point density, and managing a large number of points.
Lastly, we propose a real-time processing pipeline that transforms an image of the captured dot pattern into a high-resolution 3D point cloud using a computationally efficient pattern decoding methodology.
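As a concrete illustration of the uniqueness-window constraint, the following sketch (our own toy check with illustrative pattern sizes, not the authors' code) verifies the defining property of a perfect submap: every w x w window of the binary dot pattern is distinct.

```python
import numpy as np

def is_perfect_submap(pattern, w):
    """True iff every w x w window of the binary dot pattern is unique -- the
    defining property of a perfect submap with uniqueness window size w."""
    H, W = pattern.shape
    seen = set()
    for r in range(H - w + 1):
        for c in range(W - w + 1):
            key = pattern[r:r + w, c:c + w].tobytes()
            if key in seen:
                return False
            seen.add(key)
    return True

# A tiny hand-made perfect submap (all four 2x2 windows differ) ...
ok = is_perfect_submap(np.array([[0, 0, 1], [0, 1, 0], [1, 1, 1]], dtype=np.uint8), 2)
# ... and an obvious counterexample (an all-zero pattern repeats its windows)
bad = is_perfect_submap(np.zeros((8, 8), dtype=np.uint8), 3)
```

A decoder can then locate any observed window uniquely within the full pattern; a minimum Hamming distance between windows (not checked here) additionally provides robustness against missing or spurious dots.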
Traditional approaches to satisfying optical performance requirements within application constraints leverage the degrees of freedom (DOF) provided by multiple surface shapes distributed among lenses composed of materials with different dispersion properties. With homogeneous-index lens materials, the radius, conic constant, or higher-order aspheric surface terms provide the DOF necessary to supply optical power and address certain geometric aberrations, while lens materials with different dispersion properties provide the DOF necessary to compensate for chromatic aberrations. Nevertheless, to obtain the DOF necessary to achieve the performance required of compact head-mounted and mobile camera imaging systems, lens stacks must be composed of a significant number of complex-shaped optical elements, resulting in undesirable size, weight, and cost. Recent advancements in inkjet-print manufacture of three-dimensional (3D) freeform gradient-index (GRIN) optical materials have increased the available design DOF by integrating volumetric index gradients within the bulk of the optical material. The 3D index gradients supply optical power while reducing aberrations. Moreover, when the refractive-index spectra of the optical feedstock are precisely formulated, dispersion may be independently controlled within the lens. The 3D GRIN optical materials may be machined and polished with aspheric or freeform surface shapes that further increase the DOF and provide the lens designer unprecedented flexibility in lens designs with reduced lens counts. The performance benefits and applications of 3D GRIN optical elements are demonstrated, showcasing their potential to reduce size, weight, and cost in optical systems.
We present a novel optical system for image processing using geometric-phase lenses, or Pancharatnam-Berry lenses. These are patterned half-wave plates in which the orientation of the optical axis follows a quadratic relation with the radial coordinate. They are planar lenses that offer high diffraction efficiency and polarization selectivity, acting as convergent or divergent lenses for left and right circularly polarized input light, respectively. This polarization selectivity introduces a new degree of freedom for optical imaging. Several simple band-pass filtering experiments with binary objects are presented. As a new optical device, the unique performance of such multifunctional geometric-phase lenses offers potential applications in optical imaging.
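The polarization-dependent sign of the focal power can be seen directly in Jones calculus. In the sketch below (a textbook-style illustration with arbitrary example values for wavelength and focal length, not the authors' setup), a half-wave plate whose axis angle varies quadratically with radius imprints the geometric phase +2θ(r) on one circular polarization and -2θ(r) on the other, i.e. opposite quadratic lens phases.

```python
import numpy as np

def hwp_jones(theta):
    """Jones matrix of a half-wave plate with its fast axis at angle theta
    (a global phase factor is omitted)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]], dtype=complex)

lam, f = 633e-9, 0.1                           # example wavelength and focal length
axis = lambda r: np.pi * r**2 / (2 * lam * f)  # quadratic axis-orientation profile

lcp = np.array([1, 1j]) / np.sqrt(2)   # circular basis (one sign convention)
rcp = np.array([1, -1j]) / np.sqrt(2)

r = 1e-3
theta = axis(r)
out_l = hwp_jones(theta) @ lcp   # LCP in -> RCP out, geometric phase +2*theta
out_r = hwp_jones(theta) @ rcp   # RCP in -> LCP out, geometric phase -2*theta
err_l = np.abs(out_l - np.exp(2j * theta) * rcp).max()
err_r = np.abs(out_r - np.exp(-2j * theta) * lcp).max()
```

With θ(r) quadratic in r, the acquired phase ±2θ(r) = ±πr²/(λf) is exactly a converging or diverging paraxial lens phase depending on the input handedness.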
A holographic optical engine composed of a spatial light modulator (SLM), imagers, relay optics, and a control computer offers easy installation and use for industrial laser processing with beam shaping.
Holographic components are nowadays applied in optical systems not only for chromatic correction in combination with lenses and mirrors, but also widely in projection, detection, lighting, and filtering. They play an important role in Augmented Reality (AR) and Virtual Reality (VR) projection systems, where they achieve high imaging performance within a very compact volume. For AR head-up displays (HUDs), the increasing field of view and display distances make it more and more challenging to realize good imaging performance within a small volume. We will introduce an innovative technology from ZEISS Microoptics that differs from existing diffractive or holographic solutions. The technology is based on a multilayer holographic structure. With a smart arrangement of the holograms and the complex diffraction of the light, it offers large degrees of freedom to improve the resolution and brightness of AR-HUDs and to tremendously reduce the system size. The advanced multilayer holographic technology provides great flexibility in shape, size, and position, and the layers are also designed to improve the achromatism and color performance of AR-HUDs. This work by ZEISS Microoptics provides a breakthrough solution that overcomes the disadvantages of mirrors, lenses, and single-layer holographic components. It also provides insight for optical systems not only in automotive but also in other smart functional glass technologies, enabling compact, high-performance functions.
LIV curves are a fundamental measurement of laser diodes used to determine their electrical and optical operating characteristics. They consist of L-I curves (optical output power versus current) and V-I curves (voltage versus current), from which power conversion efficiency, threshold current, slope efficiency, kinks, the rollover point, and more can be determined. They are widely used at various stages of production, since it is critical to identify failed DUTs early in the manufacturing process. Conventionally, LIV curves are measured for DUTs with a single emitter, or for the DUT as a whole when it consists of many emitters. A detailed and comprehensive LIV test together with spectrum and beam analysis of each single emitter of an array is the focus of this study. We extend the existing one-dimensional LIV test, spectral analysis, and beam analysis (including beam numerical aperture, M², and beam waist) to each single emitter of the laser diode array under well-controlled conditions. Our experimental design consists of camera-based radiant-power and spectrum measurements. This approach allows parallelization of the measurements, which reduces the overall measurement time, and enables investigation of the cross-talk between individual emitters. We analyzed electrical, optical, and spectral differences between the emitters of a VCSEL array as well as for the array as a whole. An accepted range of variation can be set in order to identify underperforming or out-of-specification single emitters. Defective laser diodes are thereby detected at early stages of the manufacturing process, saving time and money. Such comprehensive characterization of individual emitters is crucial for demanding applications such as facial recognition, 3D sensing, in-cabin sensing, LiDAR, and ranging.
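To make the L-I metrics concrete, the sketch below (synthetic data, not measurements from this study) extracts the threshold current and slope efficiency by fitting a line to the above-threshold region of an L-I curve; the threshold is the current-axis intercept of that fit.

```python
import numpy as np

def li_characteristics(current_mA, power_mW):
    """Estimate threshold current and slope efficiency from an L-I curve by
    fitting a line to the above-threshold (lasing) region."""
    I, P = np.asarray(current_mA, float), np.asarray(power_mW, float)
    lasing = P > 0.05 * P.max()            # crude above-threshold selection
    slope, intercept = np.polyfit(I[lasing], P[lasing], 1)
    return -intercept / slope, slope       # threshold (mA), slope eff. (mW/mA)

# Synthetic ideal diode: threshold 2 mA, slope efficiency 0.5 mW/mA
I = np.linspace(0, 10, 101)
P = np.clip(0.5 * (I - 2.0), 0, None)
i_th, eta = li_characteristics(I, P)
```

Per-emitter screening then reduces to comparing each emitter's (i_th, eta) pair against the accepted range of variation.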
We present, to the best of our knowledge, the first compact, full-color, hybrid RGB LD-SLED light-source module designed for near-to-eye display systems. The module integrates blue and green semiconductor laser diodes (LDs) at wavelengths of 453 nm and 520 nm, respectively, and a red superluminescent diode (SLED) at 639 nm, combined with a novel micro-optical free-space architecture. The light-source module includes circularizing optics, wavelength-combining filters, and a single aspheric collimation lens. It has a compact footprint of 7.7 mm x 10.8 mm and generates collimated, circular, and collinearly aligned RGB beams with low divergence and large diameters in the range of 1.7 mm to 2.2 mm at the optical output. The current generation of this light-source module delivers up to 15 mW of optical power per color, with a total power dissipation of only 430 mW.
Emissive OLED-on-silicon microdisplays have so far been considered opaque. However, modern and advanced silicon CMOS process nodes are increasingly made on silicon-on-insulator (SOI) substrates. By separating the SOI handle wafer from the buried oxide (BOX) layer (which has the active silicon on top) and applying a space-conscious layout design of the CMOS active devices as well as the wiring layers, it is possible to achieve semitransparent, high-resolution CMOS backplanes for microdisplays. As with regular OLED-on-silicon, the emissive frontplane is embedded by wafer-level OLED post-processing. Depending on pixel density and array layout, a microdisplay transparency of <20% can now be achieved. Consequently, the semi-transparent microdisplay becomes the optical combiner itself, eliminating the exit pupil expander (EPE), which drastically improves the optical efficiency from the light source into the eye box. Additionally, new high-brightness OLEDs achieving <35 kcd/m² in monochrome or 10 kcd/m² in color versions, their integration onto the OLED-on-SOI platform, and an ultra-low-power pixel-cell backplane architecture (power consumption <10 mW) pave the way for meeting both form-factor and battery-life requirements in optical see-through NTE devices, enabling new optical concepts for augmented-reality (AR) devices.
Although a rapidly developing field, autonomous driving already faces challenges regarding the safety of the humans surrounding the vehicle, such as pedestrians or cyclists. Communication between the vehicle and its close proximity could overcome such issues, e.g., by projecting patterns onto the street to announce actions the vehicle will take: turn, stop, and so on. Such a projection system must yield a large projection pattern in order to cover the whole circumference of the vehicle with only a few projectors, dynamic content that can be adapted to the traffic situation, and very high brightness for daylight conditions. We propose a high-performance solution based on holographic projection. It utilizes the collimated beams of four laser diodes that are independently shaped by one reflective, high-definition LCOS spatial light modulator. A compact optomechanical design includes telescope optics to enlarge the projected fields, which are stitched laterally to achieve a 0.3 x 1 m² projection less than 50 cm from the vehicle. Image stitching as well as distortion correction is applied in the hologram-generation algorithm. As a result, we achieve the projection of dynamic patterns (above 60 fps) that can surround the full perimeter of the vehicle with a brightness in the 1-10 klux range, depending on the projected pattern.
We present in this paper a digital random-target method for measuring the Modulation Transfer Function (MTF) and Point Spread Function (PSF) locally, over the full field of view of the instrument, and with good accuracy. The key idea is to use a "pixel locking method" to identify each point in object space with its conjugate point in image space to pixel accuracy.
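The relationship between the two measured quantities is worth recalling: the MTF is the normalized magnitude of the Fourier transform of the PSF. A minimal sketch with a synthetic Gaussian PSF (illustrative data, not from the described instrument):

```python
import numpy as np

def mtf_from_psf(psf):
    """The MTF is the normalized magnitude of the Fourier transform of the PSF."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    return mtf / mtf.max()

# Synthetic Gaussian PSF (sigma = 2 px) on a 64 x 64 grid, centered at (32, 32)
n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))
mtf = mtf_from_psf(psf)        # equals 1 at zero spatial frequency
```

A wider PSF yields a narrower MTF, i.e. lower contrast at high spatial frequencies, which is why local PSF estimates across the field directly yield local MTF maps.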
Model predictive control (MPC) uses the current measurement state to predict future events and take control action accordingly. To implement MPC in our adaptive optics system (AOS), a multichannel state-space model is first identified between the driving voltages of a 61-channel deformable mirror (DM) as the input and 8th-order Zernike polynomial coefficients from a lab-made Shack-Hartmann wavefront sensor (SHWS) as the output. Conventionally, a center-of-gravity algorithm is used to reconstruct the wavefront from the SHWS, but it requires considerable computation time. Therefore, a deep learning (DL) approach based on U-Net is adopted to rapidly reconstruct the wavefront; the U-Net significantly reduces the wavefront computation time and also achieves higher accuracy. The MPC controller based on the identified system model is then implemented in the AOS. Currently, simulation results demonstrate that MPC with the DL-SHWS can rapidly correct the wavefront aberration. Eventually, the MPC-based AOS will be implemented under the Robot Operating System (ROS) to achieve real-time control.
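The prediction-and-correction idea behind MPC can be sketched on a toy single-mode model. This is an unconstrained least-squares formulation with illustrative system matrices, not the identified 61-channel AOS controller, which would add weights and actuator limits.

```python
import numpy as np

def mpc_sequence(A, B, x0, x_ref, N):
    """Unconstrained MPC sketch: find inputs u_0..u_{N-1} that drive the linear
    model x_{k+1} = A x_k + B u_k toward x_ref over an N-step horizon by plain
    least squares."""
    n, m = B.shape
    # Stacked prediction over the horizon: X = F x0 + G U
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            G[k * n:(k + 1) * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, k - j) @ B)
    target = np.tile(x_ref, N) - F @ x0
    U, *_ = np.linalg.lstsq(G, target, rcond=None)
    return U.reshape(N, m)

# Toy single-mode "mirror" model: x_{k+1} = 0.9 x_k + 0.5 u_k
A, B = np.array([[0.9]]), np.array([[0.5]])
U = mpc_sequence(A, B, x0=np.array([1.0]), x_ref=np.array([0.0]), N=5)
x = np.array([1.0])
for u in U:                      # roll the model forward with the planned inputs
    x = A @ x + B @ u
```

In receding-horizon use, only the first planned input would be applied before re-measuring and re-solving at the next sample.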
A method to increase resolution in single-pixel imaging through a parallel implementation based on the self-imaging effect is proposed. The scanning basis, Walsh-Hadamard patterns, is displayed in each unit cell of a 2D binary grating codified on a DMD. The self-imaging effect is used to project the sampling functions onto the object. Images with higher resolution can be obtained by using a light sensor with a low number of pixels and reconstructing with the single-pixel technique. This approach can be useful for improving the resolution of IR or THz cameras. Preliminary results are shown.
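The single-pixel reconstruction step is simple when the scanning basis is Walsh-Hadamard: because H·H = nI for a Sylvester Hadamard matrix, the scene follows from the detector values by one matrix product. A minimal sketch on a flattened toy scene (the self-imaging optics and parallelization are not modeled here):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Each pattern (matrix row) is displayed in turn; the bucket detector records
# one inner product per pattern: y_i = <pattern_i, scene>.
n = 16
H = hadamard(n)
scene = np.arange(n, dtype=float)      # flattened toy scene
y = H @ scene                          # single-pixel measurements
recovered = (H.T @ y) / n              # H^T H = n I  ->  exact reconstruction
```

In practice the +/-1 patterns are realized as two binary exposures (or differential measurements), since a DMD displays only 0/1 patterns.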
Fixation stability is the ability of the eyes to maintain a constant and stable gaze on a fixation target. One of the visual functions that can be affected by unstable fixation is stereopsis. Stereovision is important for a variety of daily tasks, as well as for using stereoscopic displays in visualization and entertainment, such as watching movies and playing video games. The interaction between fixation stability and stereopsis under different conditions has often been studied in children with amblyopia. The aim of our research is to explore the relationship between binocular fixation stability and stereopsis in school-aged children with no history of amblyopia or strabismus. The children were divided into two groups: those with normal stereoacuity (≤ 60 arcsec in the TNO test) and those with reduced stereoacuity (≥ 120 arcsec in the TNO test). The fixation target was displayed on a computer screen, and eye movements during fixation were recorded using a Tobii Pro Fusion eye tracker operating at 250 Hz. The results show that children with better stereoacuity tend to have more stable fixation than children with reduced stereoacuity; however, the difference in fixation stability was not significant.
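Fixation stability from eye-tracker samples is commonly quantified with the bivariate contour ellipse area (BCEA) — a metric the abstract does not name, included here only as a standard illustration on synthetic gaze data, not the study's recordings:

```python
import numpy as np

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area, a common fixation-stability metric:
    BCEA = 2 k pi sigma_x sigma_y sqrt(1 - rho^2), where P = 1 - exp(-k)
    is the proportion of gaze samples the ellipse is meant to contain."""
    k = -np.log(1 - p)
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    return 2 * k * np.pi * sx * sy * np.sqrt(1 - rho**2)

rng = np.random.default_rng(0)
steady = bcea(rng.normal(0, 0.2, 2000), rng.normal(0, 0.2, 2000))  # stable gaze
wobbly = bcea(rng.normal(0, 0.6, 2000), rng.normal(0, 0.6, 2000))  # unstable gaze
```

Smaller BCEA values indicate more stable fixation, so group comparisons reduce to comparing BCEA distributions.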
The alignment of optical components is one of the fundamental tasks in the optical manufacturing industry. If the task is extended to miniaturized aspheric glass lenses or multi-lens systems, the complexity increases further. Wavefront sensors are commonly used in the precision alignment of optical components, and they provide an elegant and efficient tool for systems with high complexity. We have implemented an alignment simulation environment based on Zemax OpticStudio (ZOS) and its application programming interface (ZOS-API). This tool is used to create either a set of linear equations or a neural network that solves the relation between the wavefront and the alignment of the optical elements. In our first alignment approach, the optical system description is linearized by simulating a set of linear dependencies connecting known perturbation values with the corresponding aberrations in the resulting wavefront, described as a set of Zernike coefficients. Based on the simulated perturbations and the resulting Zernike coefficients, the alignment correction can be determined by solving the resulting set of linear equations in MATLAB. To obtain more equations describing shallow dependencies of the on-axis solution, we added options for additional fields and an iterative solution that mimics the real alignment system by minimizing the perturbations through successive corrections of the lens position estimate. As a non-linear alternative to the computationally expensive equation-solving approach, we have also evaluated neural networks, for which the training data was automatically generated in the optical simulator via the ZOS-API. In preliminary evaluations, the neural-network approach has shown promising performance compared to the linearized-equation approach.
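The linearized approach reduces to a sensitivity-matrix least-squares problem. The sketch below uses random toy sensitivities rather than ZOS-API output to show how a perturbation vector is recovered from simulated Zernike coefficients:

```python
import numpy as np

# Linearized alignment: small perturbations p (decenters/tilts) map to Zernike
# coefficients z via a sensitivity matrix S, z ~= S p.  Solving the
# overdetermined system in the least-squares sense yields the correction.
rng = np.random.default_rng(1)
S = rng.normal(size=(12, 4))            # 12 Zernike terms, 4 alignment DOF (toy)
p_true = np.array([0.02, -0.01, 0.005, 0.03])
z_meas = S @ p_true                     # simulated "measured" aberrations
p_est, *_ = np.linalg.lstsq(S, z_meas, rcond=None)
correction = -p_est                     # move each element opposite the estimate
```

Iterating measure-solve-correct, as the abstract describes, handles the residual non-linearity that a single linearized solve leaves behind.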
Extended depth-of-focus (EDOF) diffractive lenses produce a narrow, elongated focal region, which has many applications in microscopy, laser focusing, and contact and intraocular lenses. Several designs have been proposed to achieve EDOF by angularly modulating the focal length. Nevertheless, when the focus is highly elongated, undesired intensity fluctuations appear. To solve this problem, we have generalized this type of lens by defining its focal length as a Fourier series. Moreover, we have used a Particle Swarm Optimization algorithm to optimize the Fourier series coefficients and generate an enhanced design. The performance of our Fourier Series Diffractive Lens (FSDL) is parameterized through the Full Width at Half Maximum (FWHM) of the beam and the uniformity of the intensity along the optical axis. Our results prove that the FSDL beam in the focal region is narrower and much more uniform than in previous EDOF designs, demonstrating the improvement offered by the proposed EDOF lens.
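The parameterization can be sketched as follows (with arbitrary illustrative coefficients, not the optimized FSDL design): the focal length is written as a truncated Fourier series in the azimuthal angle and inserted into the paraxial diffractive-lens phase.

```python
import numpy as np

def focal_profile(theta, f0, a, b):
    """Focal length as a truncated Fourier series in the azimuthal angle:
    f(theta) = f0 + sum_n [a_n cos(n theta) + b_n sin(n theta)]."""
    f = np.full_like(theta, f0, dtype=float)
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        f += an * np.cos(n * theta) + bn * np.sin(n * theta)
    return f

def lens_phase(r, theta, lam, f0, a, b):
    """Paraxial diffractive-lens phase with an angle-dependent focal length,
    wrapped to [0, 2 pi) as a diffractive element would encode it."""
    return np.mod(-np.pi * r**2 / (lam * focal_profile(theta, f0, a, b)), 2 * np.pi)

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
f = focal_profile(theta, f0=0.2, a=[0.01, 0.0], b=[0.0, 0.005])
phi = lens_phase(1e-3, theta, 633e-9, 0.2, [0.01, 0.0], [0.0, 0.005])
```

The coefficients (a_n, b_n) are exactly the quantities a swarm optimizer would tune against the FWHM and axial-uniformity merit figures.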
Laser scanners using Micro-Electro-Mechanical Systems (MEMS) with oscillatory mirrors are one of the most promising types of such devices. The aim of this work is the implementation of various scan patterns using such MEMS scanners. Raster, spiral, and Lissajous scan patterns are programmed and generated with different characteristic parameters. The starting point of the study is our approach to raster scanning using galvanometer-based scanners (GSs) to design optimal scan functions for such systems. In this respect, MEMS scanners are a particular case of GSs in which both oscillatory mirrors are placed in the same plane, thus improving the focusing of the device. A comparison is made between these scanning modalities: performance is evaluated based on several criteria, such as speed, resolution, field of view (FOV), fill factor (FF), and linearity of the generated scan patterns. Experimental validations are performed to determine the offset that occurs in the actual implementation. Possible applications where this scanning technique may be preferred are pointed out.
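A Lissajous trajectory and a simple fill-factor estimate can be generated in a few lines (illustrative mirror frequencies over a normalized FOV, not the parameters of the tested device):

```python
import numpy as np

def lissajous(fx, fy, t, phase=np.pi / 2):
    """Lissajous scan trajectory produced by two resonantly oscillating mirrors."""
    return np.cos(2 * np.pi * fx * t + phase), np.cos(2 * np.pi * fy * t)

def fill_factor(x, y, n_bins=32):
    """Fraction of FOV grid cells visited by the trajectory (a simple FF metric)."""
    H, *_ = np.histogram2d(x, y, bins=n_bins, range=[[-1, 1], [-1, 1]])
    return np.count_nonzero(H) / H.size

t = np.linspace(0, 1, 200_000)          # one full repeat period of the pattern
x, y = lissajous(fx=31, fy=29, t=t)     # co-prime ratio -> dense scan pattern
ff = fill_factor(x, y)
```

Varying the frequency ratio and phase trades repeat period against pattern density, which is exactly the kind of parameter sweep the comparison in the work calls for.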
We propose the use of a simplified model for the analysis of the scattering elements used in edge-lit systems. By modelling their behaviour as Lambertian light sources whose properties depend on the size and geometry of the scatterer and the light guide plate (LGP), it is possible to simulate the illuminance map of the edge-lit structure using only 2D ray-traced simulation. This reduces the computational complexity of the optimisation process used to calculate the scatterer distribution that achieves maximum uniformity in light extraction. A comparison between the results of the proposed algorithm and those of commercial software demonstrates the validity of the proposal.
Kilim Plastics, a family business started in 1985, is an ISO-certified company that specializes in converting metal parts into plastic injection-molded parts, mostly made of glass-filled engineering polymers. We supply a cross-section of high-tech, medical, aerospace, and defense companies in Israel (e.g. IAI, KLA, Orbortech, Elbit/ElOp). Over the past six years we have invested in creating a department dedicated to optics, acquiring diamond-turning machines and inspection equipment to measure optical lens profiles. We are currently at a late stage of manufacturing a high-quality optical kit aimed at the education field, mostly made of injection-molded engineering polymers. The aim of the project is to enable learning and working with optics at a price that puts a set on each student's desk, rather than a single set for an entire class, without reducing quality. The optical kit includes a modular optical table, adjustable mount holders, lenses, mirrors, a beam splitter, and laser holders.
Quality control is a critical aspect of manufacturing high-quality optical components. However, the traditional metrology techniques of sag meters and test plates are often applied only as the final step in the polishing process, leading to a higher number of defective parts and to increased production costs from the need to stock many test plates. In this paper, we present an innovative approach to measuring the radius of curvature and form error using an interferometer. The traditional method needs a pair of test plates for each radius of curvature, which incurs very high storage and maintenance costs over long-term use. In contrast, an interferometer can measure workpieces of different radii and is highly accurate. The measured data can be used in a closed-loop feedback system to compensate for form and radius errors on CNC-controlled polishing machines. This approach significantly improves flexibility and reduces the cost of the polishing process.
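The closed-loop feedback idea can be sketched with a toy proportional correction step. Everything here, including the gain value and the deliberately trivial plant model in which the radius follows the tool offset directly, is a hypothetical illustration rather than the authors' algorithm:

```python
def compensate(nominal_radius, measured_radius, current_offset, gain=0.5):
    """One closed-loop step (illustrative only): feed the interferometer's
    radius error back into a CNC tool offset with proportional gain."""
    error = measured_radius - nominal_radius
    return current_offset - gain * error

# Simulated convergence over five measure-and-polish cycles.
nominal, offset = 50.000, 0.0
measured = 50.040                        # first interferometer reading, mm
for _ in range(5):
    offset = compensate(nominal, measured, offset)
    measured = nominal + 0.040 + offset  # toy plant: radius tracks the offset
print(round(measured - nominal, 6))      # residual radius error shrinks
```

With a gain below 1 the residual error halves each cycle in this toy model; a real machine would need the gain tuned against the actual removal behaviour of the polishing tool.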
Image inpainting in textile manufacturing is an emerging research topic in preprocessing for jacquard CAD systems. One of the most important functions of a jacquard CAD system is simulating the appearance of a jacquard texture during inference, and jacquard image inpainting has become an indispensable part of that process. Jacquard image reconstruction aims to restore a damaged image with missing information, so it is first necessary to determine which parts of the image need repair. The task therefore comprises two processing stages: defect detection and defect recovery. This article presents a two-stage approach that combines new and traditional algorithms for detecting defects and repairing the damaged areas. The first stage is a defect-detection method based on a convolutional autoencoder (U-Net); the second stage is image inpainting based on exemplar-based concepts and the anisotropic gradient. Our system quantitatively outperforms state-of-the-art methods in reconstruction accuracy on the benchmark.
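The two-stage flow can be sketched with stand-ins for both stages: a thresholded reconstruction residual in place of the U-Net, and nearest-valid-pixel copying in place of exemplar-based inpainting. Both functions and all data here are simplified illustrations of the pipeline shape, not the paper's method:

```python
def detect_defects(image, reconstruction, thresh=0.2):
    """Stage 1 stand-in: the paper uses a U-Net autoencoder; here the
    reconstruction residual is simply thresholded into a defect mask."""
    return [[abs(a - b) > thresh for a, b in zip(row_i, row_r)]
            for row_i, row_r in zip(image, reconstruction)]

def inpaint(image, mask):
    """Stage 2 stand-in: exemplar-based inpainting reduced to copying the
    spatially nearest valid pixel on the same row (illustration only)."""
    out = [row[:] for row in image]
    for y, mrow in enumerate(mask):
        valid = [i for i, bad in enumerate(mrow) if not bad]
        for x, bad in enumerate(mrow):
            if bad and valid:
                out[y][x] = out[y][min(valid, key=lambda i: abs(i - x))]
    return out

img   = [[0.1, 0.1, 0.9, 0.1]]   # one scanline; 0.9 is a defective pixel
recon = [[0.1, 0.1, 0.1, 0.1]]   # an autoencoder would reproduce clean cloth
mask  = detect_defects(img, recon)
print(inpaint(img, mask))
```

The real system replaces each stand-in with a learned detector and a structure-aware filler, but the interface between the stages, an image in, a binary mask, a repaired image out, is the same.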
This article considers the construction of an algorithm for the automated analysis of the positions of objects in a confined space (field, workshop, warehouse), with the possibility of generating control actions for moving the camera and stitching the video data. Beyond constructing an optimal movement trajectory, the parallel data-analysis method allows the video subsystem to be integrated into the manipulator control system, which also makes it possible to assess the relative positions of the operator, the load, and obstacles located in the manipulator's service area. The paper presents the structure and a description of a video subsystem consisting of several neural accelerators with RISC-V core controllers and a system of video cameras. It also presents the control structure for a group of controllers united in a single network with a central computer that passes pre-processed data to the control system. The presented technique makes it possible to build a system comprising controllers, video processing, a computer, and a control system that together allow real-time processing.