In this paper we demonstrate the feasibility of a Negative Tone Development (NTD) process to pattern 22nm node contact holes, leveraging a freeform source and model-based assist features. We demonstrate this combined technology with detailed simulation and wafer results. The analysis also quantifies the further improvement achievable with a freeform source compared to a conventional standard source while keeping the mask optimization approach the same. Similar studies are performed using the Positive Tone Development (PTD) process to demonstrate the benefits of the NTD process.
Double patterning technology (DPT) provides the extension of immersion lithography before EUV lithography or other alternative lithography technologies are ready for manufacturing. Besides the additional cost of DPT processes over the traditional single-patterning process, DPT design restrictions raise concerns about potential additional design costs. This paper analyzes the design restrictions introduced by DPT in the form of DPT restricted design rules, which are the interface between design and technology. Both double patterning approaches, Litho-Etch-Litho-Etch (LELE) and Self-Aligned Double Patterning with spacer lithography (SADP), are studied. DPT design rules are summarized based on drawn design layers instead of decomposed layers. It is shown that designs can be made DPT compliant if DPT design rules are enforced and the DPT coloring check finds no odd cycles. This paper also analyzes DPT design rules in the design rule optimization flow with examples. It is essential to consider DPT design rules in the integrated optimization flow: only joint optimization of design rules across design, decomposition, and process constraints can achieve the best scaled designs for manufacturing. This paper also discusses DPT enablement in the design flow, where DPT-aware design tools are needed so that final designs can meet all DPT restricted design rules.
The 20nm generation for logic will be challenging for optical lithography, with a contacted gate pitch of ~82nm and a
minimum metal pitch of ~64nm. A gridded design approach with lines and cuts has previously been shown to allow
optimizing illuminator conditions for critical layers in logic designs.[1] The approach has shown good pattern fidelity
and is expected to be scalable to the 7nm logic node [2,3,4].
A regular pattern for logic makes the optimization problem straightforward if only standard cells are used in a chip. However, modern SoCs include large amounts of SRAM as well. The proposed approach optimizes both, rather than following the conventional approach of sacrificing the SRAM to accommodate logic layouts with bends and multiple pitches. We consider a design in which the logic and SRAMs are unified from the beginning. In this case, critical layer orientations as well as pitches are matched, and each of the layers is optimized for both functional sets of patterns.
The layout for a typical standard cell using Gridded Design rules is shown in Figure 1a. The Gate electrodes are oriented
in the vertical direction, with Active regions running horizontally. Figure 1b shows a group of SRAM bit cells designed
to be compatible with the logic cell. The Gate orientation and pitch are the same.
Optimization results will be presented for the co-optimization of critical layers for these cells. The Source-Mask Optimization (SMO) method used can optimize the illumination source [5] and mask for multiple patterns to improve the 2-D image fidelity and process window while controlling the mask sensitivity. It can incorporate the design intentions that are implied by Gridded Design rules. SMO will be performed to balance the complexity of the source against the complexity of the mask (OPC and MBSRAFs). A flexible approach to the optimization will be introduced.
We present a comprehensive study of the applicability of a fast 3D mask model in the context of source-mask optimization for advanced nodes. We compare the results of source optimization (SO) and source-mask optimization (SMO), with and without a fast 3D mask model, against rigorous 3D mask simulations and wafer data at the 22 nm technology node. We make this comparison in terms of process metrics such as depth of focus (DOF), exposure latitude (EL), and mask error enhancement factor (MEEF). We address the question of how much the illumination shape changes with the introduction of mask topography effects. We also investigate whether the illumination change introduces additional mask complexity, and at what level. The correlation between MEEF and any mask complexity due to source variation is also explored. We validate our simulation predictions with experimental data.
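As a point of reference for the process metrics compared above, the sketch below is a minimal, hypothetical illustration of one common way to derive exposure latitude and depth of focus from a CD-versus-dose/focus table; the ±10% CD tolerance and the 5% EL floor are assumptions for illustration, not values taken from this work.

```python
import numpy as np

# Hypothetical sketch: derive exposure latitude (EL) and depth of focus (DOF)
# from a CD(focus, dose) table. cd_tol and el_floor are illustrative assumptions.
def process_window(cd, doses, focuses, cd_target, cd_tol=0.10, el_floor=0.05):
    """cd: 2-D array of CD values, shape (len(focuses), len(doses));
    doses, focuses: 1-D numpy arrays of the exposure matrix settings."""
    lo, hi = cd_target * (1 - cd_tol), cd_target * (1 + cd_tol)
    el_per_focus = []
    for row in cd:                                  # one Bossung cut per focus setting
        in_spec = doses[(row >= lo) & (row <= hi)]  # doses keeping CD within tolerance
        el = (in_spec.max() - in_spec.min()) / np.median(doses) if in_spec.size else 0.0
        el_per_focus.append(el)
    el_per_focus = np.array(el_per_focus)
    usable = focuses[el_per_focus >= el_floor]      # focus range with acceptable EL
    dof = usable.max() - usable.min() if usable.size else 0.0
    return el_per_focus.max(), dof
```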
KEYWORDS: Scanning electron microscopy, Calibration, Etching, Photomasks, Data modeling, Metrology, Image processing, Plasma etching, Process modeling, Systems modeling
Mask Process Compensation (MPC) corrects proximity effects arising from the e-beam lithography and plasma etch processes used in photomask manufacturing. Accurate compensation of the mask process requires accurate, predictive models of the manufacturing processes, which in turn require accurate model calibration. We present a calibration method that uses either SEM images of 2-dimensional patterns or a combination of SEM images and 1D CD-SEM measurements. We describe how SEM images are processed to extract contours, and how metrology and process variability and SEM alignment errors are handled. Extracted develop inspection (DI) and final inspection (FI) contours are used to calibrate the e-beam and etch models. The advantages of the integrated 2D+1D model calibration are discussed in the context of contact and metal layers.
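As a rough illustration of the contour-extraction step, the sketch below smooths a grayscale SEM image to suppress shot noise and traces iso-intensity contours; the smoothing sigma and the threshold level are illustrative assumptions, not the calibrated settings of the actual flow.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# Hypothetical sketch: denoise, normalize, and trace contours from a SEM image.
def extract_contours(sem_image, sigma=2.0, level=0.5):
    smoothed = gaussian_filter(sem_image.astype(float), sigma)
    span = smoothed.max() - smoothed.min()
    normalized = (smoothed - smoothed.min()) / (span + 1e-12)
    # Each returned contour is an (N, 2) array of (row, col) coordinates.
    return measure.find_contours(normalized, level)
```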
We present a method for optimizing a free-form illuminator implemented using a diffractive optical element (DOE). The
method, which co-optimizes the source and mask taking entire images of circuit clips into account, improves the
common process-window and 2-D image fidelity. We compare process-windows for optimized standard and free-form
DOE illuminations for arrays and random placements of contact holes at the 45 nm and 32 nm nodes. Source-mask co-optimization leads to a better-performing source compared to source-only optimization. We quantify the effect of typical
DOE manufacturing defects on lithography performance in terms of NILS and common process-window.
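For reference, NILS is commonly computed as the feature size times the slope of the log-intensity at the nominal feature edge. The sketch below is a hypothetical illustration of that calculation for a 1-D aerial image cut; the function and argument names are assumptions, not part of this work.

```python
import numpy as np

# Hypothetical sketch: normalized image log-slope (NILS) of a 1-D aerial image
# cut, computed as CD * d(ln I)/dx evaluated at the nominal feature edge.
def nils(x, intensity, edge_position, cd):
    """x: 1-D position grid; intensity: positive aerial image samples on x."""
    log_slope = np.gradient(np.log(intensity), x)        # d(ln I)/dx on the grid
    slope_at_edge = np.interp(edge_position, x, log_slope)
    return cd * abs(slope_at_edge)
```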
We present a methodology for building through-process, physics-based litho and etch models that are both accurate and predictive. The litho model parameters are inverted using resist SEM data collected on a set of test structures over a set of exposure dose and defocus conditions. The litho model includes effects such as resist diffusion, chromatic aberration, defocus bias, lens aberrations, and flare. The etch model, which includes pattern density and particle collision effects, is calibrated independently of the litho model using DI and FI SEM measurements. Before being used for mask optimization, the litho and etch models are signed off using a set of verification structures. These verification structures, which have highly two-dimensional geometries, are placed on the test reticle in close vicinity to the calibration test structures. Using through-process DI and FI measurements and images from the verification structures, model predictions are compared to wafer results, and model performance is evaluated in terms of both accuracy and predictability.
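As a schematic of the parameter-inversion step, the sketch below fits model parameters so that simulated CDs match measured resist CDs across the dose/defocus matrix. It is a hypothetical sketch: `simulate_cd` stands in for the actual litho model, and the parameter set and fitting setup are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sketch: invert litho model parameters against through-process
# resist CD measurements. simulate_cd(params, dose, defocus) is a placeholder
# for the real litho model (diffusion, defocus bias, aberrations, flare).
def calibrate_litho_model(measured_cd, doses, defocuses, simulate_cd, params0):
    def residuals(params):
        predicted = np.array([simulate_cd(params, d, f)
                              for d, f in zip(doses, defocuses)])
        return predicted - measured_cd
    fit = least_squares(residuals, params0)
    return fit.x  # calibrated parameter vector
```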
In optical proximity correction, edges of polygons are segmented, and segments are independently moved to meet line-width or edge placement goals. The purpose of segmenting edges is to increase the degrees of freedom in proximity correction. Segmentation is usually performed according to predetermined, geometrical rules. Heuristic, model-based segmentation algorithms have been presented in the literature. We show that there is an optimal and unique way of segmenting polygon edges.
A typical wiring layer of a SanDisk 3-dimensional memory device includes a dense array of lines. Every other line terminates in an enlarged contact pad at the edge of the array. The pitch of the pads is twice the pitch of the dense array. When process conditions are optimized for the dense array, the gap between the pads becomes a weak point with a smaller depth of focus. As defocus increases, the space between the pads diminishes and bridges. We present a method of significantly increasing the depth of focus of the pads at the end of the dense array. By placing sub-resolution cutouts in the pads, we equalize the dominant pitch of the pads and the dense array.
Dark field Alternating Aperture Phase Shift Mask (AAPSM) technology has developed into an enabling Resolution Enhancement Technology (RET) in the sub-100nm semiconductor device era. As phase shift masks are increasingly used to resolve features beyond just the most critical (for example, transistor gates on the poly layer), the probability of phase conflicts (same phase across a feature) has increased tremendously. It has become imperative to introduce design practices that enable semiconductor fabrication to take advantage of the improved performance that AAPSM delivers. In this paper we analyze the different causes of phase conflicts and the appropriate methods for detecting them, thus building the basis for the Hybrid AAPSM compliance flow. This approach leverages the strengths of existing DRC tools and the AAPSM conversion software. It is effective at minimizing the area penalty and is therefore well suited to density-driven designs. By design, it is suited to custom or semi-custom layouts.
As we approach the 65nm node, the image imbalance phenomenon in phase shift mask lithography is proving to have a serious impact on the robustness of the phase shift mask solution. In this work we describe a new concept for phase shift imbalance correction. The method is based on an interference concept that allows the image intensity to be manipulated by placing sub-resolution features within the zero phase regions. Rigorous 3D simulations illustrate the reduction in the intensity of the 0 degree phase regions to match the intensity of the 180 degree phase regions, effectively correcting for the image imbalance. We show that the low sigma illumination conditions used with phase shift masks reduce the risk of printing these sub-resolution binary features, increasing the flexibility to vary the size of the feature locally to fine-tune the correction.
As we move towards smaller dimensions and denser circuits, Model-Based OPC has become a critical and indispensable tool for achieving feature fidelity for random logic and very small bitcell patterns. Model-Based OPC is used to overcome the effects of the reticle manufacturing process and the photolithography process, which are essentially low-pass filters, with the objective of returning the intended drawn feature on wafer within acceptable error. In this paper we demonstrate its capabilities and flexibility with the development of a mixed model-based/rule-based OPC approach that covers all categories of features for the active layer, and the heuristics that justify this approach. We discuss, along with experimental results, the parameterized variations that are possible with Model-Based OPC (MBOPC) and the optimization they require within the paradigm of a 248nm lithography process for the 0.13-micron technology. Data and manufacturability issues that are an important consideration for a feasible MBOPC solution are also discussed.
Optimal translation-invariant binary windowed filters are determined by probabilities of the form P(Y = 1 | x), where x is a vector (template) of observed values in the observation window and Y is the value in the image to be estimated by the filter. The optimal window filter is defined by y(x) = 1 if P(Y = 1 | x) > 0.5 and y(x) = 0 if P(Y = 1 | x) ≤ 0.5, which is the binary conditional expectation. The fundamental problem of filter design is to estimate P(Y = 1 | x) from data (image realizations), where x ranges over all possible observation vectors in the window. A Bayesian approach to the problem can be employed by assuming, for each x, a prior distribution for P(Y = 1 | x). These prior distributions result from considering a range of model states by which the observed images are obtained from the ideal. Instead of estimating P(Y = 1 | x) directly from observations by its sample mean relative to an image sample, P(Y = 1 | x) is estimated in the Bayesian fashion, its Bayes estimator being the conditional expectation of P(Y = 1 | x) given the data. Recently the authors have shown that, with accurate prior information, the Bayesian approach brings significant benefits to multiresolution filter design. Further, since the Bayesian filter is trained over a wider range of degradation levels, it inherits the added benefit of filtering a degraded image at different degradation levels, in addition to permitting iterative filtering. We discuss the necessary conditions that make a binary filter a good iterative filter and show that the Bayesian multiresolution filter is a natural candidate.
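To make the sample-mean (non-Bayesian) baseline mentioned above concrete, the sketch below is a minimal, hypothetical plug-in design: it estimates P(Y = 1 | x) by its relative frequency over a single training pair of observed and ideal images, and thresholds at 0.5 to obtain the binary conditional expectation filter. The function and variable names are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: plug-in (sample-mean) design of a binary windowed filter.
# For each template x observed in the window, estimate P(Y = 1 | x) by its
# relative frequency over the training pair, then threshold at 0.5.
def design_plugin_filter(observed, ideal, window=(3, 3)):
    """observed, ideal: 2-D binary numpy arrays forming one training realization."""
    wh, ww = window
    ph, pw = wh // 2, ww // 2
    counts = {}                                   # template -> (ones, total)
    rows, cols = observed.shape
    for i in range(ph, rows - ph):
        for j in range(pw, cols - pw):
            key = tuple(observed[i - ph:i + ph + 1, j - pw:j + pw + 1].ravel())
            ones, total = counts.get(key, (0, 0))
            counts[key] = (ones + int(ideal[i, j]), total + 1)
    # y(x) = 1 iff the estimated conditional probability exceeds 0.5
    return {key: int(ones / total > 0.5) for key, (ones, total) in counts.items()}
```

The Bayesian design discussed in the paper replaces this sample-mean estimate with the conditional expectation of P(Y = 1 | x) given the data under the assumed prior.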
Optimal translation-invariant binary windowed filters are determined by probabilities of the form P(Y = 1 | x), where x is a vector of observed values in the observation window and Y is the value in the image to be estimated by the filter. The optimal window filter is defined by y(x) = 1 if P(Y = 1 | x) > 0.5 and y(x) = 0 if P(Y = 1 | x) ≤ 0.5, which is the binary conditional expectation. The fundamental problem of filter design is to estimate P(Y = 1 | x) from data, where x ranges over all possible observation vectors in the window. A challenging aspect of optimal translation-invariant binary windowed filters is their implementation for large windows. In the context of the Bayesian multiresolution filter design recently published by the authors, the training requirements for an accurate prior are more stringent, and the practical feasibility of the filter design therefore becomes an issue. This paper discusses the run-time and memory issues of large-window filter design and how the bottlenecks were overcome to design practical large-window multiresolution filters. The most crucial bottlenecks are the real memory required for training and the time required for training to obtain a satisfactory estimate of P(Y = 1 | x) or its prior for large windows. Among other improvements, a method for data representation is developed that greatly reduces the storage space for the large number of templates that occur for larger windows during the training of the filter. Parallel algorithms are designed that reduce hardware-related time loss during training. In addition, we take advantage of the Bayesian filter methodology to train for large windows. While the algorithm works for even larger windows, we demonstrate the feasibility of Bayesian multiresolution filter design for window sizes of up to 31 by 31.
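One generic way to shrink the per-template footprint, shown in the sketch below, is to pack each binary template into a single integer key; this is an illustrative assumption, not necessarily the data representation developed in the paper.

```python
import numpy as np

# Hypothetical sketch: pack a binary template into one integer. For a 31x31
# window the template has 961 binary entries; a single arbitrary-precision
# integer key is far more compact than a tuple of Python ints, and it hashes
# quickly when counting template occurrences during training.
def pack_template(patch):
    key = 0
    for bit in patch.ravel().astype(np.uint8):
        key = (key << 1) | int(bit)
    return key
```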
KEYWORDS: Signal to noise ratio, Statistical analysis, Data analysis, Target detection, Monte Carlo methods, Glasses, Image segmentation, Signal detection, Image analysis, Biological research
Microarray technology makes it possible to monitor expression levels of thousands of genes simultaneously during single or multiple experiments. Routinely, in order to analyze gene expression levels quantitatively, two fluorescently labeled RNAs are hybridized to an array of cDNA probes on a glass slide. Ratios of gene expression levels arising from two co-hybridized samples are obtained through image segmentation and signal detection methods. During the past three years, we have developed a gene expression analysis system in which ratio statistics have been applied to expression analysis, and a ratio confidence interval has been established to identify ratio outliers.
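As a simplified, generic illustration of flagging ratio outliers (not the specific ratio statistics or confidence interval derived in this work), the sketch below uses a plain t-interval on replicate log-ratios; the function names and the replicate-based setup are assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: per-gene expression ratios with a simple confidence band
# on the log-ratio, estimated from replicate spots. A generic illustration only.
def flag_ratio_outliers(red, green, alpha=0.05):
    """red, green: positive arrays of shape (n_genes, n_replicates) of channel intensities."""
    log_ratio = np.log2(red / green)                       # per-replicate log-ratios
    mean = log_ratio.mean(axis=1)
    sem = log_ratio.std(axis=1, ddof=1) / np.sqrt(log_ratio.shape[1])
    half_width = stats.t.ppf(1 - alpha / 2, df=log_ratio.shape[1] - 1) * sem
    # Flag a gene when its interval excludes a ratio of 1 (log-ratio of 0).
    return (mean - half_width > 0) | (mean + half_width < 0)
```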
This paper discusses a multiresolution approach to the Bayesian design of binary filters. The key problem with Bayesian design is that, for any window, one needs enough observations of a template across the states of nature to estimate its prior distribution, which severely constrains single-window Bayesian filter design. By using a multiresolution approach and optimized training methods, we take advantage of prior probability information in designing large-window multiresolution filters. The key point is that we define each filter value at the largest resolution for which we have sufficient prior knowledge to form a prior distribution for the relevant conditional probability, and move to a sub-window when a non-uniform prior is not available. This is repeated until we are able to make a filtering decision at some window size with a known prior for the probability P(Y = 1 | x), which is guaranteed for sufficiently small windows. We consider edge noise in our experiments, with emphasis on realistically degraded document images.
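A minimal sketch of the resolution-fallback logic described above is given below; the helper names are hypothetical, and `estimators` is assumed to map each window size to whatever estimate of P(Y = 1 | x) is available at that resolution.

```python
# Hypothetical sketch: make the filtering decision at the largest window for
# which an estimate of P(Y = 1 | x) is available, otherwise fall back to a
# centered sub-window, which succeeds for small enough windows.
def center_crop(patch, size):
    rows, cols = patch.shape
    top, left = (rows - size) // 2, (cols - size) // 2
    return patch[top:top + size, left:left + size]

def multires_decision(patch, estimators, window_sizes):
    """patch: largest-window binary template (2-D numpy array);
    window_sizes: odd sizes ordered from largest to smallest."""
    for size in window_sizes:
        key = tuple(center_crop(patch, size).ravel())
        p = estimators[size].get(key)        # None when no usable prior/estimate exists
        if p is not None:
            return int(p > 0.5)
    return 0                                 # conservative default
```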