KEYWORDS: Photomasks, 3D modeling, Optical proximity correction, Data modeling, Process modeling, SRAF, Data processing, Computer simulations, Electromagnetism, Lithography
32 nm half-pitch node processes are rapidly approaching production development, but most tools for this process are
currently in early development. This development state means that significant data sets are not yet readily available for
OPC development. However, several printing effects are thought to become more prominent at the 32 nm half-pitch
node. One of the most significant effects is the three dimensional (3D) mask effect where the mask transmittance and
phase are impacted by the mask topography. For the 32 nm node it is essential that this effect is correctly captured by the OPC model. As wafer data for the 32 nm half-pitch node is difficult to obtain, the use of rigorous lithography process
simulation has proven to be invaluable in studying this effect. Using rigorous simulation, data for OPC model
development has been generated that allows the specific study of 3D mask effect calibration. This study began with
Kirchhoff-based simulations of 32 nm node features, which were calibrated into Hopkins-based OPC process models.
Once the standard Kirchhoff effects were working in the OPC model, 3D mask effects were included for the same data
by performing fully rigorous electromagnetic field (EMF) simulations on the mask. New EMF compensation
methodologies were developed to approximate 3D mask effects in a fast OPC process simulation. These methodologies
modify the phase and transmission of features to compensate for 3D mask effects in a fast OPC model. The OPC model
was then refit including the 3D mask effect and found to show differences of as much as 5 nm between the fitted Kirchhoff data and the fitted 3D mask data. In addition, the Hopkins-based OPC model with the new EMF compensation methodologies has been able to fit the 3D mask data with an RMSE value of 0.52 nm and a range of 2.76 nm. These data were compared to 32 nm half-pitch node data from IMEC. In addition, the process models were used for OPC correction with first-principles validation to understand the impact of the 3D mask effect on OPC.
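To make the idea of modifying feature transmission and phase more concrete, the following is a minimal, hypothetical sketch of one generic way to fold an EMF effect into a thin-mask description: rescaling the clear-area transmission by a complex factor and painting a narrow boundary layer along feature edges. The function names and all numerical values (amplitude, phase, edge width, edge transmission) are illustrative placeholders, not the calibrated compensation methodology developed in the paper.

```python
import numpy as np

def kirchhoff_mask(line_width, pitch, n=1024):
    """1D thin-mask transmission: opaque line of line_width per pitch, clear elsewhere."""
    pos = np.arange(n) % pitch
    return np.where(pos < line_width, 0.0, 1.0).astype(complex)

def apply_emf_compensation(t, amp=0.97, phase_deg=4.0, edge_width=3, edge_t=0.35 + 0.20j):
    """Rescale the clear-area amplitude/phase and place a boundary layer at each edge.
    All parameter values are hypothetical placeholders, not fitted to rigorous EMF data."""
    t = t * amp * np.exp(1j * np.deg2rad(phase_deg))        # global amplitude/phase adjustment
    edges = np.flatnonzero(np.abs(np.diff(np.abs(t))) > 0.5)
    for e in edges:                                          # narrow complex boundary layer
        t[max(0, e - edge_width):e + edge_width] = edge_t
    return t

t_thin = kirchhoff_mask(line_width=45, pitch=90)             # 45 nm lines at 90 nm pitch (1 px = 1 nm)
t_comp = apply_emf_compensation(t_thin)                      # EMF-compensated thin-mask transmission
```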
In its chapter "Modeling," the International Technology Roadmap for Semiconductors, 2005 edition, stipulates the need for "Multi-generation lithography system models." Most lithographers would share the opinion that even if the equipment needs constant refurbishing, the software should survive at least a couple of technology generations. Fortunately, the table in which the statement appeared in the ITRS roadmap was accurately entitled "difficult challenges." This article will shed some light on the process of progressive modeling while making clear that, in all likelihood, formidable challenges will remain. The very core of simulation is a physical/chemical model of the real world. Lithographers need a sound model for the next technology node, not a short-sighted one, despite the fact that this is very difficult to achieve. This paper will use the parable of the ichthyologist as a starting point for the problem. It will translate the parable into the "deep waters" of lithography, showcasing lithography simulation as it has evolved over the years. Finally, it will present a small, yet decisive, recent step toward predictive lithography simulation. This example will include an improvement in the model for the post-exposure bake of chemically amplified resists, as well as a non-comprehensive list of foreseeable challenges.
The paper proposes a method to mitigate the ever tighter requirements for mask CD uniformity. The basic idea is simple: as the mask error enhancement factor (MEEF) soars at low k1 values with pitches getting smaller, it should be possible to alleviate the problem if there is a way to increase the pitch. For highly repetitive layouts like the cell fields of DRAMs the solution is rather straightforward. One has to find the next larger pitch in the layout and divide the layout into sub-layers. Those sub-layers are written onto separate reticles for subsequent exposure. In consequence, the method leads to a double exposure in case two reticles have been generated. The simplest example is an array of lines and spaces with equal pitch. An almost trivial example, a regular square contact array, results in two equal checkerboards: the diagonal (1,1) is the second-smallest pitch in a square array, and fully covering the array requires two checkerboards separated by the base vector (1,0) of the original lattice. These two checkerboards are subsequently printed by a double exposure. It is obvious that the pitch can be increased further by choosing larger displacements at the cost of more sub-layers; doubling the pitch, which makes manufacturing of masks a lot easier, would require four reticles and hence four exposures. The edges and corners of regular arrays print significantly differently because the MEEF is position dependent. It can be expected that increasing the pitch is also beneficial in the sense that it levels off the MEEF variance. It will be investigated how much the common process window increases. The applicability of the method is obvious for memory layouts; it can, however, be extended to semi-periodic or even random layouts. Its value depends primarily on the density of the layout and on the k1 value, and its potential use comes in only at the leading edge of lithography where the MEEF starts to become a real pain for the mask maker. Simulation results will be shown, as well as calculations of the process latitude before and after dividing the layout.
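As an illustration of the decomposition described above, here is a minimal sketch that splits a regular square contact array into two checkerboard sub-layers using the parity of the lattice indices; each sub-layer's smallest center-to-center distance is the (1,1) diagonal of the original lattice. The function name, coordinate representation, and pitch value are illustrative; a production flow would operate on layout polygons rather than index tuples.

```python
# Minimal sketch of the pitch-relaxation idea: split a regular square contact
# array into two checkerboard sub-layers to be written on separate reticles.

def split_into_checkerboards(nx, ny, pitch):
    """Return two lists of contact centers, one per sub-layer (reticle)."""
    layer_a, layer_b = [], []
    for i in range(nx):
        for j in range(ny):
            center = (i * pitch, j * pitch)
            # Even/odd parity of (i + j) selects the checkerboard.
            (layer_a if (i + j) % 2 == 0 else layer_b).append(center)
    return layer_a, layer_b

# Example: a 6 x 6 contact array at 100 nm pitch. Each sub-layer has a minimum
# center-to-center distance of 100 * sqrt(2) nm, i.e. the pitch is relaxed by ~41%.
a, b = split_into_checkerboards(6, 6, 100)
print(len(a), len(b))   # 18 contacts on each reticle
```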
One of the hot topics in the extreme ultraviolet (EUV) mask fabrication process is the requirement to produce multilayer blanks without any printing defects. As the potential of experimental studies is still limited, predictive simulation of EUV lithography is an important step on the way to meeting this requirement. The simulator tool SOLID-EUV is extended to deal with defective multilayers. The simulation is divided into two regions: the finite-difference time-domain (FDTD) method for the absorber part and the simulation of the multilayer reflectivity by the Fresnel method. To take the defects into account, the multilayer is divided into segments which include the defect, and the reflectivity is calculated for each segment. For calculating the multilayer stack of each segment the defects are assumed to be Gaussian shaped. For the complete computation of the light reflected from the EUV mask, a coupling of the two methods is realized. This paper presents case studies using the lithography simulator tool SOLID-EUV with the new defective-multilayer simulation part to analyze the printability of defects. The impact of the defect size, the horizontal and vertical defect position within the multilayer, and the influence of the layer deposition process are analyzed, and the most influential defect parameters are identified. One defect that tends to print is then combined with typical mask structures, such as isolated lines, lines and spaces, and contact holes, and the process windows of the mask structures for various defect positions are analyzed. These simulations can be used to develop strategies to handle such defects.
Lithographic exposures belong to the most critical process steps in the manufacturing of microelectronic circuits. Almost all exposures are performed over nonplanar wafers. The backscattering of light from topographic features on these wafers is of increasing concern for the accuracy and stability of lithographic processes. We combine standard imaging theory and finite-difference time-domain (FDTD) algorithms to simulate several typical geometries. Consequences with respect to typical lithographic process parameters are discussed.
Standard simulations of optical projection systems for lithography with scalar or vector methods of Fourier optics make the assumption that the wafer stack consists of homogeneous layers. We introduce a general scheme for the rigorous electromagnetic field (EMF) simulation of lithographic exposures over non-planar wafers. Rigorous EMF simulations are performed with the finite-difference time-domain (FDTD) method. The described method is used to simulate several typical scenarios for lithographic exposures over non-planar wafers. This includes the exposure of resist lines over a poly-Si line on the wafer with orthogonal orientation, the simulation of “classical” notch problems, and the simulation of lithographic exposures over wafers with defects.
As the opportunities for experimental studies are still limited, a predictive simulation of EUV lithography is very important for a better understanding of the technology. One of the most critical issues in the modeling of EUV lithography is the description of the mask. Typical absorber heights in the range between 80 and 100 nm are more than 5 times larger than the wavelength of the EUV radiation used. Therefore, it is virtually impossible to perform parameter studies for 3D EUV masks, such as arrays of contacts or posts, on today's standard computers by straightforward application of finite-difference time-domain (FDTD) algorithms, which are used for the rigorous electromagnetic field simulation of optical masks. This paper discusses the application of field decomposition techniques for an efficient simulation of 3D EUV masks with FDTD algorithms. Comparisons with full 3D simulations are used to evaluate the accuracy and the performance of the proposed approach. The application of the new QUASI 3D rigorous electromagnetic field simulation for EUV masks reduces memory requirements and computing time by a factor of at least 100. The implemented simulation approach is applied for a first exploration of mask-induced imaging artifacts such as placement errors, telecentricity errors, Bossung asymmetries, and focus shifts for 3D EUV masks.
Optical lithography simulation plays a decisive role in the development of technology for the manufacturing process of semiconductor devices. Its role in reticle inspection has only recently gained more attention. Filters determining which defects need repair and which ones can be ignored help to set up the filter classes in inspection systems. These calculations are performed offline. In an effort to increase the accuracy of inspection it would be desirable to place the decision level as close to the actual process as possible. An inspection system based on aerial images is therefore a step in this direction. In addition, an optical simulator calculates the resist image from the aerial image. To do so, very fast resist image models are needed. Fast models have so far been limited in accuracy and speed. In this paper a new, very fast model will be presented that allows calculation of areas large enough for inspection purposes. Finally, a 'virtual inspection' system will be presented that pinpoints weak spots in the layout. In an effort to calculate larger areas of the resist in less time we had to take completely new approaches, which led us to analytical descriptions of the image transfer into the resist. Within these descriptions, we begin in this first paper to investigate an approach based on the propagation of a top aerial image into the resist. The aerial image may come from calculations, as in the present article, or from measurements. The purpose of this article is to demonstrate the performance of the Fast Resist Model with respect to accuracy and time consumption. The limits of the current model are equally described.
Data preparation has become another challenge on top of the many existing ones in mask making, mainly brought about by the advent of OPC and PSM layouts. The amount of data, doubling every year, has experienced a quantum leap: the more aggressive the optical proximity correction, the greater the leap. Hierarchical data treatment is one of the most powerful means to keep memory and CPU consumption in reasonable ranges. Only recently, however, has this technique acquired more public attention. In this paper we will present means to quantitatively measure the degree of hierarchy. In addition to global numbers, local numbers turn out to be extremely helpful; they may serve to treat different branches of a tree, e.g. of a memory layout, with different approaches. Several alternatives exist which have, to date, not been thoroughly investigated. One is a bottom-up attempt that treats cells starting with the most elementary cells. The other is a top-down approach which lends itself to creating a new hierarchy tree. A trivial approach, widely used so far, is to flatten the layout. Conditions will be shown under which the alternatives work most effectively.
Controlling the critical dimension is central in mask manufacturing, and with ever-shrinking design rules - and hence increasing requirements on mask fidelity - new and visionary ways of pushing the envelope of the critical dimension (CD) become essential. Research tools and off-line solutions for sizing, proximity correction and other CD compensations have been pursued for some time, but making efficient use of such technologies has been limited by ease of use, fracturing, and the associated computation times and data volumes. Here, we present techniques to deal with these challenges by integrating the solutions into a modern, real-time pattern generator datapath. The solution is based on hierarchical treatment of the patterns in the real-time data path of the pattern generator. By placing it in the real-time domain, we avoid the problem of exploding stream data volumes and can exploit the parallel architecture and raw computational power of the data path engine.
Electronic layouts are usually flattened on their path from the hierarchical source downstream to the wafer. Mask data preparation in particular has long been identified as a severe bottleneck. Data volumes are not only doubling every year along the ITRS roadmap; with the advent of optical proximity correction and phase-shifting masks they are escalating to unmanageable heights. Hierarchical treatment is one of the most powerful means to keep memory and CPU consumption in reasonable ranges. Only recently, however, has this technique acquired more public attention. Mask data preparation is the most critical area calling for a sound infrastructure to reduce the handling problem. Gaining more and more attention, though, are other applications such as large-area simulation and manufacturing rule checking (MRC). They would all profit from a generic engine capable of efficiently treating hierarchical data. In this paper we will present a generic engine for hierarchical treatment which solves the major problem: steady transitions along cell borders. Several alternatives exist for walking through the hierarchy tree; they have, to date, not been thoroughly investigated. One is a bottom-up attempt that treats cells starting with the most elementary cells. The other is a top-down approach which lends itself to creating a new hierarchy tree. In addition, since the variety, degree of hierarchy and quality of layouts extend over a wide range, a generic engine has to take intelligent decisions when exploding the hierarchy tree. Several applications will be shown, in particular how far the limits can be pushed with the current hierarchical engine.
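The two traversal orders mentioned above can be pictured with a small sketch. The Cell class, the processing callback, and the deduplication via a done-set are illustrative simplifications; they leave out the hard part the abstract points to, namely the treatment of transitions along cell borders.

```python
# Toy sketch of bottom-up vs. top-down traversal of a cell hierarchy.

class Cell:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # referenced sub-cells

def bottom_up(cell, process, done=None):
    """Process leaf cells first, then their parents (post-order), each cell only once."""
    done = done if done is not None else set()
    for child in cell.children:
        bottom_up(child, process, done)
    if cell.name not in done:
        process(cell)
        done.add(cell.name)

def top_down(cell, process):
    """Process a cell before descending, e.g. to build a new hierarchy tree."""
    process(cell)
    for child in cell.children:
        top_down(child, process)

# Example: a memory-like layout with one repeated elementary cell.
bit = Cell("bitcell")
array = Cell("array", [bit] * 4)
chip = Cell("chip", [array, Cell("periphery")])
bottom_up(chip, lambda c: print("processing", c.name))
```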
Rigorous modeling of diffraction from the mask is one of the most critical points in the extension of lithography simulation from its traditional spectral range between 150 and 500 nm into the area of extreme ultraviolet (EUV) between 10 and 15 nm. A typical EUV mask is made of a reflective multilayer (Mo/Si or Mo/Be, for example) deposited on a substrate. Above the multilayer, a buffer layer acts as an etch stopper, and an absorber is used for the mask pattern. If we limit our scope to layers without defects, most parts of the mask can be described by analytical methods such as transfer matrices. We therefore decided to split the mask into two parts: the first part includes the absorber and the buffer layer and is studied using a finite-difference time-domain (FDTD) algorithm; the second part includes the reflective multilayer and the substrate and is simply described by transfer matrices.
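As a rough illustration of the transfer-matrix part of this split, the sketch below computes the normal-incidence reflectivity of an idealized Mo/Si multilayer from 2x2 characteristic layer matrices. The optical constants, layer thicknesses, and bilayer count are approximate textbook values for 13.5 nm, not parameters from the paper, and the calculation ignores interdiffusion, roughness, and oblique incidence.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a single homogeneous layer at normal incidence."""
    beta = 2 * np.pi * n * d / wavelength
    return np.array([[np.cos(beta), 1j * np.sin(beta) / n],
                     [1j * n * np.sin(beta), np.cos(beta)]])

def multilayer_reflectivity(layers, wavelength, n_ambient=1.0, n_substrate=1.0):
    """Reflectivity of a stack given as a list of (refractive index, thickness) pairs,
    ordered from the ambient side down to the substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    num = n_ambient * (M[0, 0] + M[0, 1] * n_substrate) - (M[1, 0] + M[1, 1] * n_substrate)
    den = n_ambient * (M[0, 0] + M[0, 1] * n_substrate) + (M[1, 0] + M[1, 1] * n_substrate)
    return abs(num / den) ** 2

# Illustrative 40-bilayer Mo/Si stack at 13.5 nm; optical constants are rough
# textbook values (assumed, not calibrated) and the stack is defect free.
wavelength = 13.5                    # nm
n_si = 0.9990 + 0.0018j
n_mo = 0.9227 + 0.0062j
stack = [(n_si, 4.14), (n_mo, 2.76)] * 40
print(f"Normal-incidence reflectivity: {multilayer_reflectivity(stack, wavelength, n_substrate=n_si):.3f}")
```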
The applicability and accuracy of newly developed analytical models for resist process effects are investigated. These models combine a stationary level-set formulation with a lumped parameter model and make it possible to compute the 3D photoresist profile from the 3D aerial image distribution. The first model, based on the vertical propagation algorithm (VPM), takes into account the 2D intensity distribution inside the resist, including absorption. The second model incorporates the scaled defocus algorithm (SCDF), which describes the 3D intensity inside the resist, taking the defocus values into account. In this paper we investigate the applicability to arbitrary geometries and to process window determination, and we assess the accuracy by comparison with the fully fledged simulator SOLID-C. The suggested methods allow fast calculation of 3D resist profiles, thereby enabling the prediction of large areas.
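For orientation, the following is a deliberately simplified sketch of the kind of bulk-image construction such models start from: extending a top-surface aerial image into the resist via Beer-Lambert absorption. It is not the published VPM or SCDF algorithm (neither the defocus scaling nor the level-set surface extraction is shown); all names and numbers are illustrative.

```python
import numpy as np

def propagate_into_resist(top_image, thickness, alpha, nz=50):
    """Simplified sketch: extend a top-surface image I(x) into the resist bulk by
    Beer-Lambert absorption, I(x, z) = I(x, 0) * exp(-alpha * z)."""
    z = np.linspace(0.0, thickness, nz)          # depth below the resist top [nm]
    attenuation = np.exp(-alpha * z)             # bulk absorption versus depth
    return top_image[..., np.newaxis] * attenuation, z

# Example: a toy 1D line/space aerial image, 200 nm of resist, alpha = 0.002 / nm.
x = np.linspace(-200, 200, 256)
top = 0.5 + 0.5 * np.cos(2 * np.pi * x / 130.0)  # illustrative image, 130 nm pitch
bulk, z = propagate_into_resist(top, thickness=200.0, alpha=0.002)
print(bulk.shape)                                # (256, 50): intensity sampled over (x, z)
```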
This paper describes mask topography effects of alternating phase-shift masks for DUV lithography. First, two options to achieve intensity balancing are discussed. Global phase errors of +/- 10 degrees cause a CD change of 3 nm and CD placement errors of 8 nm; CD placement appears to be the parameter most affected by phase errors. A sloped quartz edge with an angle of 3 degrees causes a CD change of 10 nm. The sensitivity of the CD to local phase errors, i.e. quartz bumps or holes, was also studied. The critical defect size of a quartz bump was seen to be 150 nm for the 150 nm technology. For the investigation the recently developed topography simulator T-mask was used. The simulator was first checked against analytical tests and experimental results.
In general, simulation requires a thorough understanding of the physics and/or chemistry of the processes. This understanding should lend itself to models which can be used to establish simulation software. In addition, for a simulation to be successful, a calibration of the model is needed: a good model using bad parameters returns bad results. In lithography simulation some parameters are well known; others are less well known and may be hard to obtain. Typical examples are the development parameters, or the parameters describing the reaction mechanism of chemically amplified resists. To support the user of simulation software in the process of finding proper input parameters, the new software package FIRM has been developed and will be presented in this paper, together with applications. FIRM uses the same models for optical or e-beam lithography as SOLID-C and SELID and determines any set of coefficients from given experimental observations. Starting from an initial set of coefficients, it tries to fit calculations to observations. FIRM accepts various types of measurements, e.g. resist thickness tables or focus-exposure matrices. In addition, the user selects from a wide list of resist models the parameters to be refined. FIRM then tries to find correlations between the parameters and the differences between calculation and observation. In an iterative process the 'best' parameters are determined. The validity of the algorithm is verified against well-known test cases. Next, applications of FIRM to several new chemically amplified resists for DUV will be presented, using different types of experimental input.
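The calibration loop described above can be pictured with a small, purely illustrative sketch that fits two hypothetical development-rate parameters of a Mack-type model to synthetic "measurements" by iterative least squares. It is not FIRM itself; FIRM couples the fit to full SOLID-C/SELID simulations and supports many more measurement types and resist parameters.

```python
import numpy as np
from scipy.optimize import least_squares

def mack_rate(m, r_max, r_min, n=5.0, m_th=0.5):
    """Simplified Mack development-rate model; n and m_th are held fixed here."""
    a = (n + 1) / (n - 1) * (1 - m_th) ** n
    return r_max * (a + 1) * (1 - m) ** n / (a + (1 - m) ** n) + r_min

# Synthetic "observations": inhibitor concentration m vs. noisy development rate.
m_obs = np.linspace(0.0, 1.0, 20)
rate_obs = mack_rate(m_obs, r_max=100.0, r_min=0.1) * (1 + 0.02 * np.random.randn(20))

def residuals(params):
    """Difference between calculated and observed rates for the current parameter guess."""
    r_max, r_min = params
    return mack_rate(m_obs, r_max, r_min) - rate_obs

fit = least_squares(residuals, x0=[50.0, 1.0])   # iterate from a rough initial guess
print("fitted r_max, r_min:", fit.x)
```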
KEYWORDS: Mask making, Raster graphics, Photomasks, Critical dimension metrology, Data corrections, Scattering, Electron beam lithography, Sun, Control systems, Data conversion
The e-beam proximity effect is well known as one of the limiting factors in e-beam lithography. As features get smaller, the need for e-beam proximity effect correction increases. Different approaches exist to cover these effects by varying the dose or the shape of the pattern layout during the exposure step. Whichever algorithm is used, proximity effect correction becomes more and more of a performance problem for forefront applications like the 256 megabit and 1 gigabit chips. The correction approach has to handle large data volumes in reasonable time. The key to overcoming this hurdle is to include hierarchical data handling in the proximity correction algorithm, which involves hierarchical data structures as well as hierarchy reorganization methods. The goal of the present work is to perform all necessary steps in order to guarantee the accuracy of the exposure result for the 1 gigabit memory chip. One step of the preparation is the e-beam proximity correction for raster scan machines. With respect to proximity effect correction, raster scan machines have a severe drawback: the scanning speed is constant while writing the layout, i.e., dose variation cannot be used to compensate for the proximity effect. There is, however, the geometry, which can be exploited as a degree of freedom. Geometrical variations of the layout are subject to many constraints, such as neighboring features, the exposure grid of the e-beam tool and, not least, the writing time. The paper presents how to solve some of the major problems that occur when proximity effect correction becomes an unavoidable step in the mask making process. The power and application limits of proximity effect correction for raster scan machines are investigated. The exposure has been carried out on a MEBES 4500 system. Process latitude and line width linearity are presented. In addition, practical questions like the file size increase due to proximity correction are investigated. Exposure results of uncorrected and corrected patterns are compared to demonstrate the necessity of the correction as well as the improvement in pattern fidelity.
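Although the abstract focuses on shape-based correction for raster scan machines, the underlying proximity effect is commonly described by a two-Gaussian point-spread function (forward scattering plus backscattering). The sketch below, with illustrative parameter values and a 1D geometry, shows how the deposited dose can be estimated by convolving the pattern with such a kernel; it is a generic textbook model, not the specific correction algorithm of the paper.

```python
import numpy as np

def proximity_psf(x, alpha=0.05, beta=10.0, eta=0.7):
    """Two-Gaussian proximity point-spread function (1D cut), normalized to unit area.
    alpha: forward-scatter range [um], beta: backscatter range [um], eta: backscatter ratio.
    Values are illustrative, not fitted to a specific process."""
    fwd = np.exp(-x**2 / alpha**2) / (np.sqrt(np.pi) * alpha)
    back = eta * np.exp(-x**2 / beta**2) / (np.sqrt(np.pi) * beta)
    return (fwd + back) / (1.0 + eta)

# Deposited dose for a 1D pattern (1 = exposed, 0 = unexposed) on a 10 nm grid.
dx = 0.01                                   # grid spacing [um]
x = np.arange(-20.0, 20.0, dx)              # 40 um window to capture the backscatter tail
pattern = (np.abs(x) < 0.5).astype(float)   # a single 1 um wide feature
dose = np.convolve(pattern, proximity_psf(x), mode="same") * dx
print(dose.max(), dose[np.abs(x) > 5.0].max())   # peak dose vs. background from backscatter
```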
Both e-beam and optical proximity effects are still a major barrier in the transfer of an ULSI design from the CAD station to the printed result on the wafer. Optical proximity correction (OPC) is shown to be a strong tool to improve the printing latitudes for i-line lithography of 0.35 micrometer feature sizes and below, but it leads to fractal geometries around 0.1 micrometer (corresponding to 0.5 micrometer on a 5x reticle). This quantum leap in the required minimum linewidth on the mask may urge mask makers to apply e-beam proximity effect correction (PEC), even more than a decrease in the reticle magnification from 5x to 4x (and further) would. For raster scan e-beam systems, which are typically used in mask making, correction by dose variation is not practical. Hence, PEC for these systems must be tackled by modifying the geometry of the design, in a way similar to OPC techniques. Both corrections must compromise between the accuracy achieved, which is dominated by the selected (correction and exposure) grid size, and the resulting throughput loss caused by the use of a smaller grid size. Sigma-C now introduces a new algorithm which enables proximity effect correction by shape variation. It is included in CAPROX and supports hierarchy in the same manner as the other postprocessing operations. The exposure of the shape-corrected pattern on a raster scan machine requires only one beam pass, whereas dose variation would require one pass for each dose. Exposures were made at IMEC and at Compugraphics. The first results on Leica EBMF10.5 and MEBES III systems are promising. The pure shape correction increases the line width uniformity and opens the process window for critical dimensions below 1 micrometer. Performance measurements show that the 64 Mb DRAM is a job of a few hours.
The proximity effect in e-beam lithography is well known and many solutions exist to correct it, but none of them are able to cope with the amount of data in today's large-scale memories. In a conventional approach, the 64 Mb DRAM would lead to 10 gigabytes of flat data and weeks of processing time, for example. Recently, Sigma-C achieved a breakthrough in handling ULSIs by developing a generic algorithm for many different hierarchical processes. It solves throughput problems for operations like overlap removal (OLR) and e-beam (EPC) and optical proximity correction (OPC), which, at first glance, are inaccessible to hierarchical processing. Hierarchical algorithms take advantage of the symmetry of a layout, which grows with the number of designed shapes. Even after all processing steps a ULSI device will retain hierarchy, not necessarily the same as on input, but enough to significantly decrease processing times. Hierarchical processing is a general scheme which can be used for many different applications. Most parts of this algorithmic scheme are identical; only one part must be adapted for each application. This paper shows the general outline of hierarchical processing and the solution of the algorithmic steps specific to hierarchical e-beam proximity correction. Subsequently, the application to a variety of critical layers of the 64 and 256 Mb DRAM is demonstrated using a workstation. Corrected and uncorrected exposures are compared by SEM pictures and line width measurements. The correction not only opens the process window, it turns out to be an enabling technique for critical layers.