KEYWORDS: 3D modeling, Photomasks, 3D image processing, Extreme ultraviolet, Data modeling, Lithography, Extreme ultraviolet lithography, Process modeling, Computer programming, 3D acquisition
Background: As extreme ultraviolet (EUV) lithography has progressed toward feature dimensions smaller than the wavelength, electromagnetic field (EMF) solvers have become indispensable for EUV simulations. Although numerous approximations exist, such as the Kirchhoff method and compact mask models, computationally heavy EMF simulations have remained largely the only viable method for accurately representing the process variations dictated by mask topography effects in EUV lithography.
Aim: To accurately model EUV lithographic imaging using deep learning, accounting for 3D mask effects and EUV process variations, and thereby overcome the computational bottleneck posed by EMF simulations.
Approach: Train an efficient generative network model on 2D and 3D model aerial images of a variety of mask layouts in a manner that highlights the discrepancies and non-linearities caused by the mask topography.
Results: The trained model is capable of predicting 3D mask model aerial images from a given 2D model aerial image for varied mask layout patterns. Moreover, the model accurately predicts the EUV process variations as dictated by the mask topography effects.
Conclusions: The utilization of such deep learning frameworks to supplement or ultimately substitute rigorous EMF simulations unlocks possibilities of more efficient process optimizations and advancements in EUV lithography.
We implement a data-efficient approach to train a conditional generative adversarial network (cGAN) to predict 3D mask model aerial images, providing the cGAN with approximated 2D mask model images as inputs and 3D mask model images as outputs. This approach exploits both the similarity between the images obtained from the two computational models and the computational efficiency of the 2D mask model simulations, allowing the network to train on a reduced amount of training data compared to previously implemented approaches for accurately predicting 3D mask model images. We further demonstrate that the proposed method provides an accuracy improvement over training the network with the mask pattern layouts as inputs.
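The training objective behind such an image-to-image cGAN is commonly written, in pix2pix style, as an adversarial term plus a weighted L1 reconstruction term between the generated and target images. The sketch below illustrates that combination only; the function names, the default weight `lam`, and the use of plain Python lists in place of image tensors are illustrative assumptions, not the authors' implementation:

```python
import math

def bce(pred, target):
    # binary cross-entropy for a single discriminator score in (0, 1)
    eps = 1e-7
    pred = min(max(pred, eps), 1 - eps)
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

def generator_loss(d_on_fake, fake_img, real_img, lam=100.0):
    # adversarial term: the generator wants the discriminator to
    # score its fake 3D-model image as real (target = 1)
    adv = bce(d_on_fake, 1.0)
    # L1 reconstruction term between the predicted and the
    # rigorously simulated 3D mask model aerial image
    l1 = sum(abs(f - r) for f, r in zip(fake_img, real_img)) / len(fake_img)
    return adv + lam * l1
```

With a perfect reconstruction the L1 term vanishes and only the adversarial term remains; increasing `lam` biases training toward pixel-accurate images at the expense of the adversarial objective.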
Previous studies have shown that such a cGAN architecture is proficient at generalized and complex image-to-image translation tasks. In this work, we demonstrate that adjustments to the weighting of the generator and discriminator losses can significantly improve the accuracy of the network from a lithographic standpoint. Our initial tests indicate that training only the generator part of the cGAN can benefit accuracy while further reducing computational overhead. The accuracy of the network-generated 3D mask model images is demonstrated by the low errors of typical lithographic process metrics, such as critical dimensions and local contrast. The network's predictions also yield substantially reduced errors compared to the 2D mask model while remaining at the same low level of computational demands.
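Critical dimension, one of the process metrics mentioned above, can be extracted from an aerial-image intensity profile by finding the width between the two crossings of a print threshold. The following sketch shows one such measurement; the function name, the linear-interpolation scheme, and the sample values are illustrative assumptions rather than the evaluation code used in the work:

```python
def critical_dimension(profile, threshold, pixel_nm):
    """Width (in nm) of the region where an aerial-image intensity
    profile exceeds a print threshold, using linear interpolation
    at the two threshold crossings."""
    crossings = []
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - threshold) * (b - threshold) < 0:
            # sub-pixel position of the crossing between samples i and i+1
            frac = (threshold - a) / (b - a)
            crossings.append(i + frac)
    if len(crossings) < 2:
        raise ValueError("profile does not cross the threshold twice")
    return (crossings[-1] - crossings[0]) * pixel_nm
```

Comparing this value between the 2D model, the 3D model, and the network-predicted images gives a per-feature CD error in nanometers.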
The appearance of defects on the photomask is a key challenge in lithographic printing. Printable defects affect the lithographic process by causing errors in both the phase and magnitude of the light and in the size and location of the printed features. Presently, 193 nm optical inspection tools remain the primary means of detecting pattern defects on EUV masks.1 However, defects at the pattern sizes found on EUV masks cannot be resolved due to the resolution limit of 193 nm inspection tools. We propose and investigate the application of convolutional neural networks (CNNs) to characterize and classify defects on lithographic masks. This paper details the training and evaluation of CNNs that classify defects in simulated aerial images of an EUV setting. The simulation software Dr.LiTHO is used to simulate aerial images of defect-free masks and of masks with different types and locations of defects. Specifically, we compute images of regular arrays of squares imaged with typical settings of EUV lithography (λ = 13.5 nm, NA = 0.33). We consider five types of absorber defects (extrusion, intrusion, oversize, undersize, and center spot). The architecture of the CNN contains 4 convolutional layers (conv. layers) with mixed filter sizes of (3x3) and (5x5). The convolution stride and the spatial padding are 1 pixel for all conv. layers. Spatial pooling is carried out by 4 max-pooling layers. Two separate networks are trained for detection of the defect type and location, and a third algorithm combines their results. When an image is presented to the implemented algorithm and trained networks, it returns the defect type together with its location. An accuracy of 99.9% on the training set and 99.3% on the test set is achieved for detection of the defect type. The network trained for location detection achieves 98.7% training accuracy and 98.0% test accuracy.
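The spatial arithmetic of the stack described above (four conv. layers with stride 1 and padding 1, each followed by 2x2 max pooling) can be checked with a few lines. The 128-pixel input width and the particular alternation of 3x3 and 5x5 filters below are assumptions for illustration; the abstract states only that both filter sizes are used:

```python
def conv_out(n, kernel, stride=1, padding=1):
    # output width of a convolution on an n-pixel-wide input
    return (n + 2 * padding - kernel) // stride + 1

def pool_out(n, size=2):
    # output width after non-overlapping max pooling
    return n // size

def feature_map_sizes(n, kernels=(3, 5, 3, 5)):
    # one conv. layer (stride 1, padding 1) followed by one
    # 2x2 max-pooling layer, repeated four times
    sizes = [n]
    for k in kernels:
        n = pool_out(conv_out(n, k))
        sizes.append(n)
    return sizes
```

Note that with padding 1 a 3x3 filter preserves the spatial size, while a 5x5 filter shrinks it by 2 pixels before each halving by the pooling layer.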
Given a sufficient number of training images, the trained CNNs classify the defect types and their locations in the aerial image with high accuracy. The proposed method can also be applied to other defect types and simulation settings.