In urgent post-tape-out situations, runtime is a major challenge for OPC (Optical Proximity Correction) on back-end layers. Sometimes there is no time left for recipe tuning to clean all ORC (Optical Rule Check) hotspots, so a repair flow is the practical choice; however, the repair flow is a costly final step, especially for metal layers. In challenging cases, thousands or even millions of hotspots that fail various ORC criteria must enter the repair flow, which overloads the system and makes the runtime unacceptable. Machine Learning (ML) has many applications in the IC field, such as ML-OPC, ML hotspot detection, and ML-SRAF, and Calibre ML-OPC provides powerful functionality that fits the requirements of the repair flow very well. In this paper, we introduce how to use Calibre ML-OPC to remove most of the defects and speed up the repair process, and we demonstrate the benefit compared to the traditional repair flow.
Resolution enhancement technique requirements drive continuously increasing mask pattern complexity, which creates an increasingly serious problem of ever longer mask writing times. Shot count is highly correlated with the writing time of VSB writing tools, so shot count management is one practical direction for writing time reduction. Small shots may be induced by small features and pattern jogs arising from OPC and other factors, and an effective solution must reduce shot count without compromising pattern quality. This paper tests a shot count management solution that removes the small shots produced by misaligned vertices on both pattern edges under certain conditions: a post-OPC small shot removal solution that aligns vertices before pattern fracture. The mask pattern differences induced by shot optimization are verified in the flow to ensure that they do not violate OPC requirements. As the solution reduces shot count compared with the original pattern, the efficiency of the shot count reduction and the writing time savings are evaluated; pattern quality is also evaluated at both mask level and wafer level to ensure no degradation.
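As an illustration of the vertex-alignment idea, the following minimal Python sketch (the tolerance value, coordinate units, and rectilinear-polygon representation are assumptions, not the paper's production flow) snaps nearly-identical vertex coordinates to a common value so that fracturing no longer emits sliver shots from tiny jogs.

```python
# Minimal sketch: align nearly-identical vertex coordinates so that
# fracturing a rectilinear polygon does not emit sliver ("small") shots.
# Units, tolerance, and polygon representation are illustrative assumptions.

def snap_axis(values, tol):
    """Group coordinates that differ by less than `tol` and map every
    member of a group to the group mean."""
    mapping, group = {}, []
    for v in sorted(set(values)):
        if group and v - group[-1] > tol:
            for g in group:
                mapping[g] = sum(group) / len(group)
            group = []
        group.append(v)
    for g in group:
        mapping[g] = sum(group) / len(group)
    return mapping

def align_vertices(poly, tol):
    """poly: list of (x, y) vertices of a closed rectilinear polygon."""
    xs = snap_axis([x for x, _ in poly], tol)
    ys = snap_axis([y for _, y in poly], tol)
    snapped = [(xs[x], ys[y]) for x, y in poly]
    # Drop consecutive duplicates created by snapping (wraps around);
    # a leftover collinear vertex is harmless to fracture.
    return [p for i, p in enumerate(snapped) if p != snapped[i - 1]]

# A 1 nm jog on the right edge would otherwise fracture into a sliver shot.
poly = [(0, 0), (100, 0), (100, 50), (101, 50), (101, 100), (0, 100)]
print(align_vertices(poly, tol=2))
```

In a full flow, the resulting pattern differences would then be re-verified against the OPC requirements, as the abstract describes.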
With shrinking nodes, as layout patterns become more and more complicated, OPC accuracy and performance are becoming increasingly challenging. While we try to perfect our OPC script to produce clean output without weak points, in a real urgent tape-out scenario there will often be weak points, and we cannot afford the cost of rerunning OPC with an updated recipe. Post-OPC repair then becomes the only cost-effective choice. This paper studies and compares several methods for post-OPC weak-point repair: the manual OPC repair flow and the traditional repair flow based on DRC commands. We introduce a novel method based on eqDRC commands, which are widely used in design houses but have not previously been applied in the post-OPC flow. We discuss how to apply eqDRC to post-OPC repair and demonstrate its advantages over the traditional methods.
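Since actual eqDRC recipes are written in Calibre's SVRF/TVF language, the fragment below is only a conceptual Python sketch of what distinguishes equation-based checks from traditional pass/fail DRC; the measurement names, limits, and scoring equation are illustrative assumptions, not the paper's recipe. The point is that each weak-point site receives a continuous severity value computed from several measurements at once, which can then rank repair candidates.

```python
# Conceptual sketch only: real eqDRC checks are written in Calibre
# SVRF/TVF. The measurements, limits, and equation below are assumptions.

def severity(width_nm, space_nm, w_min=40.0, s_min=40.0):
    """One continuous score instead of two independent pass/fail rules.
    1.0 is exactly at the limit; larger means a worse weak point."""
    return max(w_min / width_nm, s_min / space_nm)

# Weak-point sites: (site id, local width, local space), both in nm.
sites = [("A", 38.0, 60.0), ("B", 45.0, 41.0), ("C", 36.0, 39.0)]

# Unlike binary DRC, the equation also ranks repair priority.
for name, w, sp in sorted(sites, key=lambda s: -severity(s[1], s[2])):
    print(f"site {name}: severity {severity(w, sp):.2f}")
```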
Semiconductor manufacturing technologies are becoming increasingly complex with every passing node. Newer technology nodes are pushing the limits of optical lithography and requiring multiple exposures with exotic material stacks for each critical layer. All of this added complexity usually amounts to further restrictions on what can be designed, and designs must be checked against all these restrictions in the verification and sign-off stages. Design rules are intended to capture all the manufacturing limitations such that yield can be maximized for any given design adhering to all the rules. Most manufacturing steps employ some sort of model-based simulation that characterizes the behavior of each step. The lithography models play a very big part in the overall yield and design restrictions in patterning. However, lithography models are not practical to run during design creation due to their prohibitively slow run times. Furthermore, the models are not usually given to foundry customers because of the confidential and sensitive nature of every foundry's processes. The design layout locations where a model flags unacceptable simulated results can instead be used to define pattern rules, which can be shared with customers. With advanced technology nodes we see a large growth of pattern-based rules, because pattern matching is very fast and such rules can be very complex to describe in a standard DRC language. Therefore, the patterns are kept either as pattern layout clips or abstracted into pattern-like syntax that a pattern matcher can use directly. The patterns themselves can be multi-layered with "fuzzy" designations such that groups of similar patterns can be found using one description. The pattern matcher is often integrated with a DRC tool such that verification and signoff can be done in one step. The patterns can be layout constructs that are "forbidden", "waived", or simply low-yielding in nature, and they can contain built-in remedies so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task, especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper will present an automatic pattern library creation flow that uses a few known yield detractor patterns to systematically expand the pattern library and generate optimized patterns. We will also look at the specific fixing hints, in terms of edge movements and additive or subtractive changes, needed during optimization. Optimization will be shown for both digital physical implementation and custom design methods.
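To make the "fuzzy" matching idea concrete, here is a toy Python sketch that slides a small binary pattern over a rasterized layout and accepts windows within a mismatch budget. Real pattern matchers such as Calibre Pattern Matching operate on polygon geometry with far richer fuzzy designations; the bitmap representation and tolerance here are assumptions for illustration only.

```python
import numpy as np

def fuzzy_match(layout, pattern, max_mismatch=1):
    """Slide `pattern` (small 0/1 array) over `layout` (large 0/1 array)
    and return top-left offsets where at most `max_mismatch` pixels
    differ -- a toy version of a 'fuzzy' pattern designation."""
    ph, pw = pattern.shape
    hits = []
    for y in range(layout.shape[0] - ph + 1):
        for x in range(layout.shape[1] - pw + 1):
            window = layout[y:y + ph, x:x + pw]
            if np.count_nonzero(window != pattern) <= max_mismatch:
                hits.append((y, x))
    return hits

layout = np.zeros((8, 8), dtype=int)
layout[2:5, 2:4] = 1          # a 3x2 feature in the layout
pattern = np.array([[1, 1],
                    [1, 1],
                    [1, 0]])  # similar but not identical shape
print(fuzzy_match(layout, pattern, max_mismatch=1))
```

One fuzzy description thus finds the whole group of near-identical configurations, which is what lets a few seed patterns expand into a library.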
As the IC industry moves forward to advanced nodes, especially below 28nm, printability is becoming more and more challenging: layouts are more congested, critical feature sizes are smaller, and the manufacturing process window is tighter. Consequently, design-process co-optimization plays an important role in achieving higher yield in a shorter tape-out time. Great effort must be made to analyze process defects and build checking kits that deliver manufacturing information to designers through EDA software, so that potential manufacturing issues can be uncovered, hotspots quickly identified, and fixes prioritized according to severity level. This paper presents a unique hotspot pattern analysis flow that SMIC has built for advanced technology to analyze potential yield detractor patterns among millions of patterns from real designs and rank them with severity levels tied to the real fab process. The flow uses Mentor Graphics Calibre® PM (Calibre® Pattern Matching) technology for pattern library creation and pattern clustering, and incorporates Calibre® LFD (Calibre® Litho-Friendly Design) technology for accurate simulation-based lithographic hotspot checking. Pattern building, clustering, scoring, ranking, and fixing are introduced in detail in this paper.
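A minimal sketch of the scoring and ranking step might look as follows, assuming each pattern already carries simulated CDs from litho checking at a few process-window corners; the target CD, corner set, and severity-level cutoffs are illustrative assumptions rather than SMIC's calibrated values.

```python
# Toy scoring/ranking of hotspot patterns. Assumes simulation-based
# checking already produced per-corner CDs; all numbers are assumptions.

TARGET_CD = 45.0  # nm, illustrative target

hotspots = {
    # pattern id: simulated CD (nm) at [nominal, defocus, +dose, -dose]
    "pat_001": [44.8, 39.0, 46.0, 43.1],
    "pat_002": [44.5, 30.5, 45.5, 42.0],   # pinches badly at defocus
    "pat_003": [45.1, 44.0, 45.9, 44.3],
}

def score(cds, target=TARGET_CD):
    """Severity = worst relative CD deviation over all corners."""
    return max(abs(cd - target) / target for cd in cds)

def level(s):
    """Map a score to a coarse severity level for repair prioritization."""
    return "high" if s > 0.20 else "medium" if s > 0.10 else "low"

for pid, cds in sorted(hotspots.items(), key=lambda kv: -score(kv[1])):
    s = score(cds)
    print(f"{pid}: score {s:.2f} -> {level(s)}")
```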
With the advent of advanced process technologies such as 65nm and below, designs become more and more sensitive to manufacturing process variation. Although complicated design rules can guarantee process margin for most layout environments, some layouts that pass DRC still have narrow process windows. An effective layout optimization approach based on Litho-Friendly Design (LFD), one of Mentor Graphics' products, was introduced to enhance design layout manufacturability. In addition to process window models and production-proven Optical Proximity Correction (OPC) recipes, LFD design kits are generated; with these kits and rules, a design that passes their checks should be guaranteed to have no process window issues. Finally, full-chip OPC and post-OPC process variation checks were applied to the metal layer of a real 65nm product. Some layouts with narrow process windows were detected and identified, then optimized for a larger process window based on the advice provided by LFD. Both simulation and in-line data showed that depth of focus (DOF) was improved after the layout optimization without changing the area, timing, or power of the original design.
As the critical dimension of IC designs decreases dramatically, resolution enhancement technologies have become extremely important for meeting manufacturing yield targets. For the 90nm technology node and below, sub-resolution assist features (SRAFs) are usually employed to enhance the robustness of the lithography process. SRAF is a powerful methodology for pushing the process limit under given equipment conditions. However, SRAFs also have a drawback: it is very hard to verify that SRAF placement is reasonable, especially when SRAFs are applied across a full chip. This work demonstrates a model-based approach to full-chip checking of the SRAF insertion rules. First, we capture the lithography process information from empirical wafer data. Then we check every SRAF location to find any hotspot at risk of printing on the wafer. With this approach, we can apply a full-chip check to reduce SRAF printability; furthermore, combined with DRC tools, we can find unreasonably inserted SRAFs and modify them.
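The following toy Python sketch illustrates the shape of such a model-based SRAF printability check: compute an image from the mask, then flag any SRAF whose peak intensity reaches the resist threshold. A real flow uses a calibrated optical/resist model built from empirical wafer data as described above; the Gaussian blur, the grid, the geometry, and the 0.30 threshold here are stand-in assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy stand-in for a model-based SRAF printability check; all numbers
# are illustrative assumptions. Grid resolution: 10 nm per pixel.

RESIST_THRESHOLD = 0.30   # normalized intensity at which resist clears

mask = np.zeros((60, 60))
mask[20:40, 24:30] = 1.0  # main feature: a 60 nm wide line
mask[20:40, 13:16] = 1.0  # oversized 30 nm SRAF, 80 nm from the line

aerial = gaussian_filter(mask, sigma=3.0)  # crude imaging model

sraf_peak = aerial[20:40, 13:16].max()
print(f"SRAF peak intensity: {sraf_peak:.2f}")
if sraf_peak >= RESIST_THRESHOLD:
    print("hotspot: SRAF risks printing -> flag for resize/relocation")
```

Flagged locations could then be cross-checked against the insertion rules with DRC tools to decide whether to resize or move the SRAF.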
Model-based OPC has become standard practice and the centerpiece for the 130nm technology node and below, and every model builder tries to set up a physically realistic, adequately calibrated model that contains the information needed for prediction and analysis of a given process. However, some physics in the process remains unknown or not well understood, such as line edge roughness (LER). LER is one of the most worrisome non-tool-related obstacles facing next-generation lithography. Nowadays, considerable effort is devoted to moderating its effects, as well as understanding its impact on devices. It is a persistent problem for 193nm lithography, which will be used for at least three more generations, culminating with immersion lithography. Studies have shown that LER has several sources and forms. It can be quantified by an LER measurement with top-down CD metrology, but LER also shows up in other ways, such as line breakage resulting from insufficient resist or mask patterning processes, line-width aspect ratio, or topography. Here we collected a large amount of line-width ADI CD data together with the LER of each edge, and we show that even using the average value of different datasets carries measurement inaccuracy into the model fitting process, which makes fitting more time-consuming and can degrade convergence and stability. This work weights the wafer data points with a weighting function that depends on the LER value of each one-dimensional feature in the model-fitting sampling space. With this approach, we can filter out wrong process information and make the OPC model more accurate. Furthermore, we introduce this LER factor into the variable threshold model parameters and examine how it differs from other variable threshold model forms.
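As a sketch of LER-dependent weighting, the Python fragment below fits a toy linear model to synthetic gauge data with inverse-variance weights w = 1/(LER² + ε); the weighting form, the linear model, and the data are illustrative assumptions, not the paper's actual variable threshold model or weighting function.

```python
import numpy as np

# Sketch of LER-dependent weighting in model fitting. The inverse-variance
# weighting, linear model, and synthetic data are illustrative assumptions.

rng = np.random.default_rng(0)

# Per-gauge data: design CD (x), measured ADI CD (y), measured LER (nm).
x   = np.linspace(60, 200, 30)
ler = rng.uniform(1.0, 6.0, x.size)
y   = 0.95 * x + 4.0 + rng.normal(0.0, ler)  # rough edges measure noisier

def fit_linear(x, y, w):
    """Weighted least squares for y ~ a*x + b: scale rows by sqrt(w)."""
    sw = np.sqrt(w)
    A = np.column_stack([x, np.ones_like(x)]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return coef

uniform  = fit_linear(x, y, np.ones_like(x))
weighted = fit_linear(x, y, 1.0 / (ler ** 2 + 1e-6))  # trust smooth edges
print("uniform  a,b:", uniform)
print("weighted a,b:", weighted)
```

Down-weighting high-LER gauges keeps their measurement noise from dragging the fit, which is the stability argument the abstract makes.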
Semiconductor foundries need a single, standard mask preparation procedure to deal with the large number of designs they receive. This data is typically of two sorts: random logic, over which the foundry has little control of how the design intent is represented, and cells from dense arrays such as memory, often with design rule violations, whose OPC correction needs to be precisely optimized to achieve the best yield and device performance. Occasionally the input data will contain sub-resolvable notches and extensions which, while not violating DRC specifications, would result in DRC violations if filled. This may be due to a non-DFM-aware automated layout tool, or a designer aggressively trying to maximize circuit density. In practice it is worthwhile to clean up these notches to ease OPC correction. Doing this should not cause printability errors, as these notches typically represent a more complex curved design intent that cannot be accurately represented due to the restrictions imposed by the limited number of polygon edge directions available for layout. Similarly, memory cell layouts often have significant implied curvature; these may only be corrected properly if the OPC target point is defined precisely for each individual segment. In general, letting the OPC correction engine correct a layout defined by a realistic, curved target shape gives better quality corrections with a greater process window. The challenge for the OPC engineer working in a foundry is therefore to determine a clean-up methodology for incoming data and to correctly apply the design intent, where necessary, from the original pre-cleanup data. A programmable OPC engine gives the user flexibility in optimizing the set of rules embedded in the OPC cleanup and correction recipes. These recipes embed the algorithms to interpret the rounding of the desired silicon image, not only for line ends and corners of random logic but also for the more complex curved silicon images and tolerances required by memory cells.
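One common way to implement such notch clean-up is morphological closing: expand the layout by half the minimum resolvable notch width and shrink it back, with mitred joins to stay rectilinear. The shapely-based Python sketch below illustrates the idea; the 20nm threshold and the library choice are assumptions, as a production flow would use the DRC/OPC tool's own geometry operations and preserve the pre-cleanup data for design intent.

```python
from shapely.geometry import Polygon

# Sketch of sub-resolution notch clean-up via morphological closing.
# The 20 nm threshold and use of shapely are illustrative assumptions.

NOTCH = 20.0  # fill concave notches narrower than this (nm)

def fill_notches(poly, notch=NOTCH):
    """Expand then shrink by notch/2 with mitred joins (join_style=2)
    so the result stays rectilinear; notches narrower than `notch`
    are filled, while the outer boundary is restored."""
    d = notch / 2.0
    return poly.buffer(d, join_style=2).buffer(-d, join_style=2)

# A 100x100 shape with a 10 nm wide, 10 nm deep notch on its top edge.
layout = Polygon([(0, 0), (100, 0), (100, 100), (55, 100),
                  (55, 90), (45, 90), (45, 100), (0, 100)])
cleaned = fill_notches(layout)
print(len(layout.exterior.coords), "->", len(cleaned.exterior.coords))
```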
As semiconductor manufacturing moves to the 90nm node and below, shrinking feature sizes and increasing IC complexity have combined to significantly stretch the time needed to optimize and qualify process-anchored OPC models and recipes. Process distortion and non-linearity become non-trivial issues and conspire to reduce the quality of the resulting corrections. Additionally, optimizing the OPC model and recipe on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process. Finally, the increased complexity of transforming the target pattern into a corrected mask pattern also increases the probability of systematic lithography errors: fatal errors (pinching or bridging) or poor CD distribution may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove that models and recipes approach the center of the process for a range of designs. In this paper, we describe a full-chip simulation-based verification flow using a commercial product that serves both OPC model and recipe development as well as post-OPC verification after production release of the OPC.