Numerous recent approaches have proven the efficacy of deep learning as a fast and efficient surrogate for various
lithography simulation use cases. However, a drawback of such approaches is the requirement of large amounts
of data, which is often difficult to obtain at advanced nodes. Different means to alleviate the data demands of
deep learning models have been devised, such as transfer learning from different technology nodes and active
data selection. Active data selection techniques, however, tend to require large candidate data sets from which to optimize and select.
In our previous work, we devised a more efficient implementation of transfer learning and detailed numerous
applications for EUV lithography. In this work, we expand on the data efficiency enhancements with domain
knowledge-based data selection and the use of alternative data generated by different modeling approaches.
The potential of deep learning as a supplement for image simulations to allow for more efficient modeling of new lithographic configurations has been explored in recent years. A routine challenge with deep learning solutions is their inherent data inefficiency. This work details a deep learning model capable of predicting aerial images for different mask absorbers and illumination settings. We expand on this model by investigating its accuracy potential and data efficiency. This investigation provides insights into the amounts of training data required to achieve the optimum accuracy for different absorber stacks. A significant variance in data requirements and achievable accuracy across different absorbers is observed. The observed trends indicate that the amount of training data required to train the model is directly correlated to the severity of the mask-3D effects of the absorber. This work presents a method that can improve the data efficiency of this predictive model without compromising the accuracy for novel absorbers or new lithographic configurations.
KEYWORDS: 3D modeling, Photomasks, 3D image processing, Extreme ultraviolet, Data modeling, Lithography, Extreme ultraviolet lithography, Process modeling, Computer programming, 3D acquisition
Background: As extreme ultraviolet (EUV) lithography has progressed toward feature dimensions smaller than the wavelength, electromagnetic field (EMF) solvers have become indispensable for EUV simulations. Although numerous approximations such as the Kirchhoff method and compact mask models exist, computationally heavy EMF simulations have largely remained the sole viable method of accurately representing the process variations dictated by mask topography effects in EUV lithography.
Aim: Accurately modeling EUV lithographic imaging using deep learning while taking into account 3D mask effects and EUV process variations, to surpass the computational bottleneck posed by EMF simulations.
Approach: Train an efficient generative network model on 2D and 3D model aerial images of a variety of mask layouts in a manner that highlights the discrepancies and non-linearities caused by the mask topography.
Results: The trained model is capable of predicting 3D mask model aerial images from a given 2D model aerial image for varied mask layout patterns. Moreover, the model accurately predicts the EUV process variations as dictated by the mask topography effects.
Conclusions: The utilization of such deep learning frameworks to supplement or ultimately substitute rigorous EMF simulations unlocks possibilities of more efficient process optimizations and advancements in EUV lithography.
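One simple way to quantify the mask-topography discrepancies that such a model must learn is a per-pixel difference map between a thin-mask (2D/Kirchhoff) aerial image and a rigorous 3D-mask-model aerial image. The sketch below is a hypothetical helper for this comparison, not the authors' implementation:

```python
import numpy as np

def mask3d_discrepancy(aerial_2d, aerial_3d):
    """Per-pixel difference map and RMS error between a thin-mask (2D) aerial
    image and a rigorous 3D mask model aerial image. Illustrative helper for
    quantifying mask-topography effects; not the paper's code."""
    diff = aerial_3d - aerial_2d          # signed error map, same shape as inputs
    rms = float(np.sqrt(np.mean(diff ** 2)))  # scalar RMS discrepancy
    return diff, rms

# Toy usage: two identical images have zero discrepancy
_, rms = mask3d_discrepancy(np.ones((4, 4)), np.ones((4, 4)))
```

Such an error map can also serve as a training-time diagnostic, indicating which layout regions are most affected by mask-3D effects.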
We implement a data-efficient approach to train a conditional generative adversarial network (cGAN)
to predict 3D mask model aerial images: the cGAN receives approximated 2D mask model images as inputs and learns to produce the corresponding 3D mask model images as outputs. This approach exploits the similarity between the images obtained from the two computational models, together with the computational efficiency of the 2D mask model simulations, allowing the network to train on less data than approaches previously implemented to accurately predict the 3D mask model images. We further demonstrate that the proposed method improves accuracy over training the network with the mask pattern layouts as inputs.
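As a minimal sketch of how such an image-to-image cGAN is commonly trained (a pix2pix-style objective is assumed here; the abstract does not specify the authors' exact loss), the generator can minimize an adversarial term plus a weighted pixel-wise L1 term. The function and parameter names below are illustrative:

```python
import numpy as np

def generator_loss(disc_fake_logits, fake_img, target_img, lambda_l1=100.0):
    """Pix2pix-style generator objective (illustrative, not the paper's code):
    adversarial term (binary cross-entropy from logits, target label "real")
    plus a weighted L1 distance to the target 3D mask model image."""
    # BCE with target 1 for the discriminator's logits on generated images:
    # -log(sigmoid(z)) = log(1 + exp(-z))
    adv = np.mean(np.log1p(np.exp(-disc_fake_logits)))
    # Pixel-wise L1 distance between predicted and target aerial images
    l1 = np.mean(np.abs(fake_img - target_img))
    return adv + lambda_l1 * l1

# Toy usage: a perfect prediction drives the L1 term to zero
pred, target = np.ones((8, 8)), np.ones((8, 8))
loss = generator_loss(np.array([5.0]), pred, target)
```

The weight `lambda_l1` controls the balance between adversarial realism and pixel fidelity; a large value biases the generator toward faithful image reconstruction.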
Previous studies have shown that such a cGAN architecture is proficient at generalized and complex image-to-image translation tasks. In this work, we demonstrate that adjusting the weighting of the generator and discriminator losses can significantly improve the accuracy of the network from a lithographic standpoint. Our initial tests indicate that training only the generator part of the cGAN can benefit accuracy while further reducing computational overhead. The accuracy of the network-generated 3D mask model images is demonstrated by low errors in typical lithographic process metrics, such as critical dimensions and local contrast. The network's predictions also yield substantially reduced errors compared to the 2D mask model while maintaining the same low computational demands.
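For context on the quoted metrics, a critical dimension (CD) can be read off a 1-D cut through an aerial image as the distance between threshold crossings, and a simple Michelson-style definition can serve for local contrast. The exact metric definitions used in the paper are not given in the abstract; the sketch below is an illustrative assumption:

```python
import numpy as np

def measure_cd(intensity, positions, threshold):
    """Critical dimension from a 1-D aerial-image cut: distance between the
    first and last crossings of the intensity threshold, with linear
    interpolation between samples. Illustrative sketch only."""
    above = intensity >= threshold
    idx = np.flatnonzero(np.diff(above.astype(int)))  # sample index before each crossing
    if idx.size < 2:
        return 0.0
    def interp(i):  # interpolate the crossing position between samples i and i+1
        t = (threshold - intensity[i]) / (intensity[i + 1] - intensity[i])
        return positions[i] + t * (positions[i + 1] - positions[i])
    return interp(idx[-1]) - interp(idx[0])

def local_contrast(intensity):
    """Michelson-style contrast of an aerial-image cut: (Imax - Imin)/(Imax + Imin)."""
    imax, imin = float(np.max(intensity)), float(np.min(intensity))
    return (imax - imin) / (imax + imin)

# Toy usage: a feature above threshold 0.5 spanning x ~ 2.5 to 6.5 gives CD ~ 4.0
x = np.arange(10.0)
I = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0], dtype=float)
cd = measure_cd(I, x, 0.5)
```

Evaluating such metrics on network-generated versus rigorously simulated images gives lithographically meaningful error figures, rather than purely pixel-wise ones.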