An advanced area scaling approach for semiconductor burn-in

Daniel Kurz a,*, Horst Lewitschnig b, Jürgen Pilz a

a Department of Statistics, Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria
b Infineon Technologies Austria AG, Villach, Austria

Article history: Received 26 May 2014; Received in revised form 5 August 2014; Accepted 2 September 2014; Available online 11 October 2014

Keywords: Area scaling; Binomial distribution; Burn-in; Semiconductors; Serial system reliability

Abstract

In semiconductor manufacturing, early life failures are avoided by putting the produced items under accelerated stress conditions before delivery. The products' early life failure probability p is assessed by means of a burn-in study, in which a sample of the stressed items is investigated for early failures. The aim is to prove a target failure probability of the produced devices and to release stress testing of the whole population. Given the failure probability level on a reference product, the failure probabilities of so-called follower products with different chip sizes are then obtained by means of area scaling. Classically, area scaling is done with respect to the whole area of the chip. Nevertheless, semiconductors can be partitioned into different chip subsets, which can have different likelihoods of failure. In this paper, we propose a novel area scaling model for the chip failure probability p, which enables us to scale the chip subsets separately from each other. The main idea is to adapt the classical estimators of the failure probabilities of the chip partitions according to the number of failures on the different chip subsets. This leads to a more appropriate estimation of the failure probabilities of the follower products and helps to improve the efficiency of burn-in testing.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Semiconductor devices are used in many safety-critical applications like cars, trains or medical products.
For that reason, it is essential to ensure high reliability of the delivered items by screening out potential early life defects before delivery. Burn-in (BI) testing is applied to reduce the increased failure rate λ(t) at the beginning of the chips' early life [1]. By putting the manufactured items under accelerated temperature and voltage stress conditions, early failures (infant mortalities) can be detected and weeded out. As, e.g., [2–5] show, defect screening by means of BI is an important topic in the semiconductor industry. Interested readers can find more technical details on BI testing in [6–8].

However, full BI testing (that is, putting the whole population of a specific product under BI stress) is expensive in terms of costs, time and further resources. The idea is therefore to successively reduce the BI time by investigating a sample of the devices for statistical evidence of a reduced failure rate up to a certain time point of early life. In general, we distinguish between two approaches for reducing the BI duration [9].

The first approach aims at inferring the targeted failure rate from (censored) lifetime data of early failures. These data allow estimating the parameters of the probability distribution of the random lifetime of early failures (e.g. a Weibull distribution Wb(a, b) [10] with scale parameter a > 0 and shape parameter b < 1). Based on that, the BI time can be assessed, e.g. by inferring the 90%-quantile of the estimated lifetime distribution, see e.g. [11–13]. In this way, the BI duration can be successively reduced.

The idea of the second approach is to release full BI testing by demonstrating a target failure probability for the produced devices on a random sample out of the running production. We refer to this as a BI study, in which the stressed items are investigated for BI relevant failures (e.g. particles in oxide, contact hole defects, metallization residues, etc.).
More precisely, the devices undergo an electrical test after the BI stress in order to identify potential BI related defects. Failures that are electrically confirmed are then physically analyzed. In this failure analysis, the root cause, the failure location on the chip and, from this, the failure's BI relevance are identified. Finally, the product's failure probability p can be estimated based on the observed number of BI relevant failures. If the obtained estimate is below the predefined target failure probability, full BI testing is released and a BI monitoring procedure is initiated. Note, however, that this approach assumes fixed failure read-out times, which are based on a predefined lifetime distribution of early failures. In [14], we present an approach

Microelectronics Reliability 55 (2015) 129–137
http://dx.doi.org/10.1016/j.microrel.2014.09.007
* Corresponding author at: Department of Statistics, University Street 65-67, 9020 Klagenfurt, Austria. Tel.: +43 463 2700 3141. E-mail address: Daniel.Kurz@aau.at (D. Kurz).
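The quantities appearing in the introduction can be illustrated with a short numerical sketch: the quantile-based BI time of the first approach, an upper bound on the failure probability p demonstrated by a zero-failure BI study under a binomial model, and classical whole-area scaling of that level to a follower product (treating the larger chip as a serial system of reference-sized areas). All function names and numerical values below are our own illustrative assumptions, not figures or notation from the paper.

```python
import math

def weibull_quantile(q, scale, shape):
    """q-quantile of a Weibull(a, b) lifetime distribution (scale a, shape b).

    For early-life failures the shape parameter is typically b < 1,
    i.e. a decreasing failure rate."""
    return scale * (-math.log(1.0 - q)) ** (1.0 / shape)

def zero_failure_upper_bound(n, confidence=0.9):
    """Upper confidence bound on the failure probability p when a BI study
    of n stressed devices shows zero BI relevant failures (binomial model):
    solve (1 - p)^n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

def scale_to_follower(p_ref, area_ref, area_follower):
    """Classical whole-area scaling: model the follower chip as a serial
    system of area_follower / area_ref reference-sized areas, assuming
    defects occur independently per unit area."""
    return 1.0 - (1.0 - p_ref) ** (area_follower / area_ref)

# BI time from the 90%-quantile of an assumed early-life distribution
# (made-up Weibull parameters).
t_bi = weibull_quantile(0.9, scale=20.0, shape=0.5)

# Failure probability demonstrated by a hypothetical BI study:
# 3000 stressed devices, zero BI relevant failures, 90% confidence.
p_hat = zero_failure_upper_bound(3000, confidence=0.9)

# Scale the demonstrated level to a follower chip of twice the area.
p_follower = scale_to_follower(p_hat, area_ref=1.0, area_follower=2.0)
```

The serial-system form of `scale_to_follower` corresponds to the classical whole-area scaling that the paper's partition-based model refines: it scales the entire chip area uniformly, whereas the proposed approach scales chip subsets separately according to where failures were observed.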