JOURNAL OF ELECTRONIC TESTING: Theory and Applications 19, 325–340, 2003
© 2003 Kluwer Academic Publishers. Manufactured in The Netherlands.
Replacing IDDQ Testing: With Variance Reduction

C. THIBEAULT
Department of Electrical Engineering, École de Technologie Supérieure, Montréal, Québec, Canada
[email protected]
Received July 11, 2001; Revised August 27, 2002
Editor: P.C. Maxwell
Abstract. This work is part of our effort to find an alternative to IDDQ testing. Specifically, this paper presents our variance reduction post-processing approach in order to replace IDDQ. It describes our test procedure based on Delta IDDQ histograms. It shows how this test procedure can help to reduce variance, optimize test resources and reduce the impact of process drifting and resolution loss caused by the expected IDDQ growth. Another practical aspect is discussed, namely the use of the proposed test procedure in a production test. We propose a new distribution model and revisit some experimental data, which provides a better understanding of the relationship between defect and fault. The results obtained so far confirm the pertinence of our test approach and the necessity of keeping current testing alive.

Keywords: (Delta) IDDQ testing, HBTP, test, variance reduction
1. Introduction
It is now generally accepted that IDDQ testing will not be applicable in its current form in the near future and has to be replaced or at least complemented. The causes of its loss of efficiency are well known and well documented in the literature [1–5]. In summary, in addition to the continuous increase in the number of transistors, the scaling of CMOS devices, through a decrease in VDD, causes:
• an increase in the leakage current, in terms of its mean and its variance, and
• a decrease in the additional amount of current caused by a (bridging) fault, which is proportional to VDD, and inversely proportional to the parasitic bridging resistance, which tends to increase [6].
The reduced efficiency of IDDQ should greatly impact how the test is performed since it remains one of the most efficient ways of detecting bridging faults
and of compensating for the inaccuracies of the stuck-at fault model [7–11]. This explains why research has been undertaken to replace IDDQ. Some current-based alternative solutions are emerging. They can be classified as:
• single-threshold IDDQ with a modified environment/design to reduce leakage current: this includes solutions like substrate bias, lower temperature, lower Vdd, and power-supply partitioning (chip level) [12];
• dynamic-IDD-based solutions: this includes solutions using transient current measurements with different post-processing techniques [13–17];
• solutions based on post-processing of IDDQ measurements [1–3, 5, 18–27].
By exploring in more detail the last class of solutions, the following alternatives can be found. Maxwell et al. [20] proposed the use of current ratios, Rc, for production current testing. This technique provides interesting
results when applied to today's technologies. However, it will soon suffer from a lack of sensitivity with respect to the additional average amount of current caused by defects. Let Δdef be this amount of current. Estimating the sensitivity, S, of Rc with respect to Δdef leads to [28]:

$$S = \frac{\Delta_{def}}{I_G + \Delta_{def}}, \qquad (1)$$
where I_G is the average IDDQ of a good IC. The term Δdef is expected to decrease while I_G is expected to increase [4], such that S will be continuously decreasing as technology scales down. Current signatures [27] were also proposed as a post-processing technique, where IDDQ measurements are ordered by their magnitude. Jandhyala et al. [21] presented a clustering-based approach. Their results look promising; however, their approach is highly sophisticated and requires a very solid theoretical background to be understood and correctly applied. This may significantly slow down its acceptance and use in production testing environments. Another clustering approach, based on linear regression, was also proposed in [24]. One alternative that has received a lot of attention lately is Delta IDDQ testing [1–3, 5, 18, 19]. Delta (or differential, Δ) IDDQ is derived from IDDF testing [22]. It was first applied for diagnosis [29, 30], then used for testing purposes [18, 23]. Delta IDDQ leads to a better test quality than IDDQ because it allows a reduction in current variance [23], which has been identified as the major problem in current testing [3, 23]. In this paper, we define a Delta IDDQ measurement at a given test pattern as the IDDQ measurement obtained at the same pattern minus the previous IDDQ measurement. Other post-processing alternatives tackle the challenge from a variance reduction standpoint. In [25], Variyam proposed the use of residuals of current predictions based on linear regression to reduce variance and consequently increase IDDQ resolution, while [26] proposed the use of residuals of current predictions based on IDDQ measurements from the nearest neighbors. In this paper, we present and analyze in detail a test approach also based on variance reduction strategies. The paper is structured as follows. The next section describes the benefits of variance reduction. As discussed there, variance comes from different sources. Therefore the reduction strategies will be applied at the different corresponding levels. The proposed approach, which is
an extension of Delta IDDQ, is described in Section 3. Section 4 shows how this approach can be used to optimize test resources. In Section 5, we show that our approach helps to reduce the impact of the process drifting and resolution loss caused by the anticipated IDDQ increase. Its insertion into a production test is discussed in Section 6. In Section 7, we investigate the defect vs. fault relationship, by proposing a distribution model and reexamining the Sematech data¹ [31, 32] from a different perspective. We discuss the next challenge for current testing in Section 8, and we conclude in Section 9.

2. Variance Reduction
The rationale behind any variance reduction strategy can be easily understood when looking at Fig. 1, where the situation is depicted as two overlapping distributions, one for defect-free ICs and one for faulty ones. This representation is of course a simplified view of reality [33] (we reexamine it in Section 7), but it is helpful to understand the impact of the technology scale-down. Under this representation, the overlapping area becomes the probability of a poor test decision, namely, declaring a good IC faulty (yield loss) or calling a bad one good (test escape). With the trends described in the previous section, it is clear that this overlapping area is increasing with scaling since both distributions are closer and closer while their variance is increasing. The overlapping area mainly depends on:
• the distance between the two distributions, and
• the variance of each distribution.
Fig. 1. Symbolic representation of overlapping current distributions.
The distance between the two (closest) current distributions is a function of the value of Vdd and the actual value of the resistive path between Vdd and GND, meaning that we have basically no control over this first parameter (beyond the actual applicable Vdd upper limit). However, theory tells us that post-processing can help to reduce the variance. The main variance sources are:
• the IC-to-IC leakage current variations,
• the pattern-to-pattern current variations, and
• the variations caused by the measurement equipment itself.
Analyzing Sematech experimental data revealed that the most important variance source was the IC-to-IC leakage current variations [23]. This explains the interest in Delta IDDQ since it practically eliminates these variations at the expense of doubling (in theory; in practice we obtained a ratio of 1.7 [23]) the variance due to the two other sources. But in the end it pays since a 10-fold variance reduction was reported in [23], based once again on Sematech data. The elimination of the IC-to-IC variations directly impacts current distributions like the ones in Fig. 1. As they were shown to be independent of the other types of variations, their elimination results in a variance (e.g., the one for the good ICs) which is about equal to the original one minus their related one. From a variance reduction standpoint, Delta IDDQ represents a good starting point. However, additional variance reduction can be obtained, as shown in the next section where a Histogram-Based Test Procedure (HBTP) is presented.
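To make the Delta operation concrete, the following minimal Python sketch (with invented current values, not Sematech measurements) shows how subtracting consecutive IDDQ readings removes a constant IC-to-IC leakage offset while leaving only the pattern-to-pattern and measurement components:

```python
import random

random.seed(1)

# Hypothetical per-pattern IDDQ for two ICs: identical pattern-to-pattern
# behaviour, but IC B carries a much larger (defect-free) background leakage.
pattern_component = [random.gauss(0.0, 0.05) for _ in range(20)]   # µA
iddq_a = [1.0 + x for x in pattern_component]                      # low-leakage IC
iddq_b = [3.0 + x for x in pattern_component]                      # high-leakage IC

# Delta IDDQ: measurement at a pattern minus the previous measurement.
delta_a = [b - a for a, b in zip(iddq_a, iddq_a[1:])]
delta_b = [b - a for a, b in zip(iddq_b, iddq_b[1:])]

# The 1.0 µA vs. 3.0 µA IC-to-IC offset disappears after the Delta operation,
# which is why Delta IDDQ removes the dominant IC-to-IC variance component.
print(delta_a[:3])
print(delta_b[:3])   # identical to delta_a
```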
3. Histogram-Based Test Procedure
This test approach was first introduced in [18]. We summarize it in this section. This approach is an extension of Delta IDDQ. With HBTP, Delta IDDQ is considered as a first step in reducing current variance, to which another current variance reduction technique is added, namely the use of histograms. A typical Delta IDDQ histogram, obtained from real measurements (Sematech data), is shown in Fig. 2. One can notice a first peak centered at 0, which is always there, with or without a defect or fault. The presence of a defect leading to a fault causing an abnormally elevated IDDQ level adds symmetrical peaks in the Delta IDDQ histogram.
Fig. 2. Delta IDDQ histogram example (one chip).
Since Delta IDDQ practically eliminates IC-to-IC variations, the main objective of using histograms is to reduce variance from the two other sources, namely pattern-to-pattern and measurement variations. Therefore, the variation reduction will occur at this level, on the variation affecting the final single (Delta IDDQ) maximum value. Such a reduction is possible due to the "averaging effect" of the histogram peaks. Indeed, each histogram peak is a collection of (Delta IDDQ) measurements and its abscissa value is a good estimate of Δdef. In theory [34], since using this abscissa value is similar to averaging the x-value of the measurements making up the peak, the variance associated with the estimation of Δdef is reduced by a factor equal to the number of measurements making up the peak. For this reason, we exploit the symmetry and build histograms using |ΔIDDQ| instead of ΔIDDQ since it doubles the number of points, providing an additional variance reduction factor of 2. The abscissa (absolute) value of these peaks corresponds to the additional current caused by the fault, Δdef. Therefore, HBTP consists of estimating Δdef by: 1) building a |ΔIDDQ| histogram (for each tested IC), and 2) identifying the highest peak abscissa value. This value is then compared to a single (Delta IDDQ) threshold. To identify this peak value, the following procedure can be applied: there is a peak at the jth bar of a histogram if:
• histo(j − 1) < histo(j) > histo(j + 1), or histo(j − 1) < histo(j) = histo(j + 1) > histo(j + 2),
and if:
• histo(j) − shb_val >= mindiff_val,
where histo(j) is the number of occurrences of the |ΔIDDQ| value corresponding to the jth bar of the histogram, shb_val the smallest histogram bar value between histo(j) and histo(0), and mindiff_val the necessary minimum difference (threshold) value between histo(j) and shb_val to have a peak. This last condition is there to avoid false peak detection caused by small glitches sometimes present on the central peak. So far, we have used a mindiff_val value of 2, which was empirically set. Picking the size of the histogram bins may also be an iterative process. In this work, the bin size was empirically set at 0.4 µA (namely 4 times the measurement resolution during the Sematech experiment, for the ICs with IDDQ values of 4 µA or less).

In addition to the variance reduction on the estimation of the additional current caused by an active defect, the use of the histogram peaks will also contribute to reducing the Delta IDDQ value associated with the non-defective ICs, where there is no additional peak and where the estimated value of Δdef will in theory be 0, since this is the value of the only peak present in the histogram. Fig. 3 provides an example illustrating the different variance reductions obtained first by using Delta IDDQ, and then HBTP. On the left side, there are three symbolic distributions (from a hypothetical set of tested ICs, one value taken per IC), from top to bottom: one for the maximum IDDQ value, one for the maximum |Delta IDDQ| value, and one for the HBTP peak value. On the right side, there are four symbolic distributions, built from measurements taken from two hypothetical ICs (IC-1, IC-2): for each IC, there is an IDDQ and a Delta IDDQ value distribution. In our example, IC-1 has a lower maximum IDDQ value than IC-2, but is affected by a weak bridging defect, producing two distinguishable peaks, while IC-2 has a higher but tolerable background current. In the overall maximum IDDQ value distribution (top/left), IC-1 is associated with a bin whose value is smaller than the one associated with IC-2, as indicated by the related dashed arrows. Delta IDDQ helps to put them in the right order, as they appear in the overall maximum |Delta IDDQ| value distribution (top/center). Finally, HBTP pushes the bin linked to IC-2 even further to the left, since no peak is detected in the IC-2 |Delta IDDQ| distribution (bottom/right).

Fig. 3. Symbolic distributions: IDDQ max., |Delta IDDQ| max., and HBTP peak max. for a complete set of hypothetical ICs (left side, from top to bottom); IDDQ and |Delta IDDQ| for two hypothetical ICs (right side).
The impact of the variance reduction provided by HBTP on the current distributions for a complete set of tested ICs (e.g., Fig. 1) is not as straightforward as the one produced by Delta IDDQ itself. Nevertheless, it positively affects these distributions, as shown in Fig. 4. Three distributions, made of actual measurements, are shown: the maximum IDDQ value for each IC, the maximum |Delta IDDQ| value for each IC, and the maximum Delta IDDQ histogram (HBTP) peak value for each IC. These curves were obtained using Sematech data for all ICs with a maximum IDDQ value of 5.0 µA (a total of 12029 samples that are likely to be good ICs). The Delta IDDQ histogram peak values were obtained by using the previously described procedure with a mindiff_val value of 2. This graph is a very clear illustration of the benefits provided by HBTP over Delta IDDQ testing (and IDDQ testing). First, it reduces the distribution variance; then, it practically eliminates the 0.2 µA peak of the Delta IDDQ distribution (likely to come from normal IDDQ consumption, since there is a strong correlation between the peak appearance and a few test patterns [18]; see also Section 6), which at the same time increases the distance between good and defective IC distributions.

Fig. 4. Distribution of maximum values (IDDQ, Delta IDDQ, and Histogram Peak).
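As an illustration of the peak-search step described in this section, here is a minimal Python sketch. The 0.4 µA bin size and mindiff_val = 2 are the values quoted above; everything else (function name, bin-centre convention, tie-breaking) is an assumption of this sketch, not the published implementation.

```python
def hbtp_peak(delta_iddq, bin_size=0.4, mindiff_val=2):
    """Return the abscissa (µA) of the highest |Delta IDDQ| histogram peak.

    Builds a |Delta IDDQ| histogram, then applies the two Section 3 conditions:
    a local maximum (possibly a two-bin plateau) whose height exceeds the
    smallest bar between it and bin 0 by at least mindiff_val.
    """
    values = [abs(d) for d in delta_iddq]
    n_bins = int(max(values) / bin_size) + 1
    histo = [0] * (n_bins + 2)                 # padding so histo[j + 2] always exists
    for v in values:
        histo[int(v / bin_size)] += 1

    best = 0.0                                  # defect-free default: peak at 0
    for j in range(1, n_bins):
        local_max = (histo[j - 1] < histo[j] > histo[j + 1]) or \
                    (histo[j - 1] < histo[j] == histo[j + 1] > histo[j + 2])
        shb_val = min(histo[:j])                # smallest bar between bin 0 and bin j
        if local_max and histo[j] - shb_val >= mindiff_val:
            best = max(best, (j + 0.5) * bin_size)   # bin centre taken as the estimate
    return best
```

The returned value would then be compared to a single Delta IDDQ threshold, as described above.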
4. Optimizing Test Resources
In this section, we show how HBTP can help to optimize test resources. We show that using more IDDQ test vectors of a shorter duration (settling time) while keeping the total test time constant can lead to a better test quality by means of a reduced variance. Moreover, we show that using more IDDQ test vectors of a shorter duration while keeping the total test quality (variance) constant can lead to a shorter total test time. Furthermore, we also introduce the concept of ΔIDDQ partitioning to increase the test quality. In the following, we assume that the IDDQ test time is simply a function of the number of tests and individual test times, which means that we disregard the additional fixed overhead that may exist during testing.

4.1. Modeling Variance Sources
To understand how the optimization strategy works, we first have to model the variance sources. When using ΔIDDQ (and HBTP), the main variance sources are the vector-to-vector variations and the measurement variations. Both have been shown to follow a Gaussian distribution [23]. Assuming that they are independent, we obtain:

$$\sigma^2_{\Delta,total} = \sigma^2_{\Delta,vect} + \sigma^2_{\Delta,meas}, \qquad (2)$$

where σ²_Δ,total is the ΔIDDQ total variance (for a single IC), σ²_Δ,vect the ΔIDDQ vector-to-vector variance, and σ²_Δ,meas the ΔIDDQ measurement variance.
Table 1. Variance ratio value estimation for a 100 nA current measurement accuracy.

Gates (e+06)   σ²_Δ,vect (µA²)   σ²_Δ,meas (µA²)   σ²_Δ,meas / σ²_Δ,total
1              9e-03             2.22e-03          0.20
10             9e-02             2.22e-03          0.02
Using the Sematech data, we estimated the σ²_Δ,meas/σ²_Δ,total ratio value to quantify the relative importance of the measurement variance. The σ²_Δ,total value came from a variance analysis [23], while the σ²_Δ,meas value was obtained by estimating the ΔIDDQ variance on a particular vector (the one showing the weakest variation) with a very limited number of ICs, and assuming that the other variation sources were negligible, meaning that the actual σ²_Δ,meas value is smaller than or equal to the estimate. The ratio value obtained was in fact less than 2%, which shows that the vector-to-vector variations were (for the Sematech experiment) much more important than the measurement variations. This constitutes a key observation in understanding the strategy. Table 1 lists ratio values estimated using measurements taken from a 0.35 µm CMOS IC monitor for the estimation of the Ioff (leakage current) variance per transistor, and assuming that 25% of the transistors were switching, 4 transistors per gate, Ioff variance independence, and a 100 nA current measurement accuracy. Interestingly, a 2% ratio was also obtained with 10 M gates.

This relatively low importance of the measurement variance can be exploited when the histogram-based method is used (remember that a variance value associated with a peak is reduced by a factor of N when the number of samples used to build the peak is multiplied by the same factor N). The idea would be to use N times more vectors, whose duration would be reduced by the same factor to keep a constant total test time. To take advantage of the Gaussian behavior and the variance reduction, it is important not to repeat the same set of vectors N times but to use additional test patterns that are different from the original set. Let σ²_Δ,N be the resulting ΔIDDQ total variance. This variance is associated with histogram peaks, in particular those caused by defects. It represents the remaining variance associated with the estimation of the additional amount of current caused by a given defect. It may be expressed as:

$$\sigma^2_{\Delta,N} = \frac{1}{N}\left(\sigma^2_{\Delta,vect} + \sigma^2_{\Delta,meas,N}\right), \qquad (3)$$

where the 1/N reduction factor is provided by the presence of N times more points in each histogram peak (and the so-called peak averaging effect), and where σ²_Δ,meas,N is the resulting ΔIDDQ measurement variance. The σ²_Δ,meas,N term is derived below. As can be estimated from this derivation, σ²_Δ,meas,N is proportional to N^2.8, meaning that at one point σ²_Δ,N will start to increase with N and that there exists an optimal value of N minimizing σ²_Δ,N, as shown in Section 4.3. The σ²_Δ,vect term represents vector-to-vector variation and is therefore not directly affected by the reduction of settling time, contrary to σ²_Δ,meas,N.

To complete our derivation and define the σ²_Δ,meas,N term, we need to establish a relationship between the duration (settling time) of a vector and measurement variance. In [35], Crapuchettes presented a theoretical analysis linking IDDQ measurement accuracy to IDDQ test settling time. This curve (equation) may be expressed as:

$$\log T_{S,N} = m \log A_{I,N} + b, \qquad (4)$$

where T_S,N is the resulting (reduced) settling time, A_I,N the resulting IDDQ measurement accuracy, m the slope, equal to −0.71, and b the offset, equal to −9.1. This curve was validated with experimental data [36]. We complete the settling time vs. variance relationship by assuming that the IDDQ measurement accuracy is equal to 3 times the IDDQ measurement standard deviation, and that the ΔIDDQ measurement variance is, as suggested by theory, twice the IDDQ measurement variance. More specifically, σ²_Δ,meas,N may be expressed as:

$$\sigma^2_{\Delta,meas,N} = 2\,\sigma^2_{I,meas,N}, \qquad (5)$$

where σ²_I,meas,N, the resulting IDDQ measurement variance, may be expressed as:

$$\sigma^2_{I,meas,N} = \frac{1}{9}\,A^2_{I,N}. \qquad (6)$$

Finally, from (4), A_I,N may be expressed as:

$$A_{I,N} = \left(\frac{T_{S,1}}{N\,10^{b}}\right)^{1/m}, \qquad (7)$$

where T_S,1 = N·T_S,N is the original test vector settling time. Substituting the previous equations into (3) gives:

$$\sigma^2_{\Delta,N} = \frac{1}{N}\left[\sigma^2_{\Delta,vect} + \frac{2}{9}\left(\frac{T_{S,1}}{N\,10^{b}}\right)^{2/m}\right]. \qquad (8)$$

4.2. Delta-IDDQ Partitioning

Assuming that the original set of vectors (when N = 1) provides a satisfactory coverage, one can take advantage of the extra vectors (when N > 1) for other purposes. One very interesting possibility is to use them to reduce internal switching from one vector to another. In the following, we present a strategy to achieve this goal, called ΔIDDQ partitioning.

The concept of ΔIDDQ partitioning is simple. Since we are using ΔIDDQ, any part of an IC that is kept quiet (causing only DC current, no transitions) does not contribute to the increase in σ²_Δ,vect. Therefore, the idea is to limit input/state transitions to one of the N IC partitions, while the others are kept quiet. One easy way of implementing ΔIDDQ partitioning is to use (or take advantage of existing) multiple scan chains, as illustrated in Fig. 5. If an IC already contains such multiple scan chains, the only hardware overhead is the circuitry required to control each scan chain independently. Assuming the independence of the IDDQ variances of transistors, and that σ²_Δ,vect is the sum of these independent variances, then (8) becomes:

$$\sigma^2_{\Delta,N} = \frac{1}{N}\left[\frac{\sigma^2_{\Delta,vect}}{N} + \frac{2}{9}\left(\frac{T_{S,1}}{N\,10^{b}}\right)^{2/m}\right], \qquad (9)$$

that is, the term σ²_Δ,vect is now divided by N.

Fig. 5. Delta-IDDQ partitioning with multiple scan chains.
Table 2. Optimal values of N and corresponding variance factors, with different ratio values.

         Without part. (8)            With part. (9)
Ratio    N opt.   Variance factor     N opt.   Variance factor
0.01     4        0.372               3        0.184
0.02     3        0.474               3        0.257
0.04     3        0.615               2        0.381
0.05     2        0.652               2        0.414
0.08     2        0.743               2        0.513
0.10     2        0.803               2        0.578
0.16     2        0.985               2        0.775
0.20     1        1.000               2        0.906

Table 3. Optimal values of N and corresponding variance reduction factors, with different IDDQ measurement accuracy.

                   Without part. (8)            With part. (9)
IDDQ Acc. (µA)     N opt.   Variance factor     N opt.   Variance factor
0.01               3        0.481               3        0.264
0.10               3        0.474               3        0.257
1.0                3        0.472               3        0.254
10                 3        0.474               3        0.257
100                3        0.483               3        0.266
4.3. Maximizing Test Quality for a Given Total Test Time
Equations (9) and (8) can be used to maximize the test quality for a given total test time (with and without partitioning, respectively) by finding the value of N that minimizes the value of σ²_Δ,N. Table 2 presents some results obtained with and without partitioning, for different σ²_Δ,meas/σ²_Δ,total ratio values. These results were obtained by assuming a 100 nA IDDQ measurement accuracy. The variance factor is the ratio of the resulting σ²_Δ,N value (at N opt.) over the original σ²_Δ,N value (at N = 1). These results reveal that significant variance reductions can be obtained, especially for low ratio values, and that these reductions are amplified by ΔIDDQ partitioning. Such results clearly validate the use of shorter-duration test vectors as well as the benefits of ΔIDDQ partitioning, knowing that any variance reduction leads to a decrease in yield loss and test escape [23]. Note that the IDDQ measurement accuracy has a very limited effect on the results, as shown in Table 3 (where a σ²_Δ,meas/σ²_Δ,total ratio of 0.02 is assumed).
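To make this search concrete, here is a small numerical sketch in Python of the minimization over N, based on the reconstructed Eqs. (8) and (9). The m and b values and the 100 nA accuracy come from the text; the helper names and the search range up to N = 10 are assumptions of this sketch.

```python
# Sketch of the Table 2 computation: find the N minimizing the Eq. (8)/(9) variance,
# assuming a 100 nA accuracy at N = 1 and a given sigma2_meas/sigma2_total ratio.
M, B = -0.71, -9.1                       # slope and offset of Eq. (4)
ACC_1 = 0.1e-6                           # 100 nA measurement accuracy (A)

def variance(n, sigma2_vect, ts_1, partitioning=False):
    vect = sigma2_vect / n if partitioning else sigma2_vect
    meas = (2.0 / 9.0) * (ts_1 / (n * 10**B)) ** (2.0 / M)
    return (vect + meas) / n             # Eq. (8) or (9)

def optimal_n(ratio, partitioning=False, n_max=10):
    sigma2_meas = 2.0 * ACC_1**2 / 9.0                 # Eqs. (5)-(6) at N = 1
    sigma2_vect = sigma2_meas * (1.0 - ratio) / ratio  # fixes the assumed ratio
    ts_1 = 10**B * ACC_1**M                            # settling time from Eq. (4)
    base = variance(1, sigma2_vect, ts_1)              # original variance (N = 1)
    best = min(range(1, n_max + 1),
               key=lambda n: variance(n, sigma2_vect, ts_1, partitioning))
    return best, variance(best, sigma2_vect, ts_1, partitioning) / base

print(optimal_n(0.02))        # without partitioning
print(optimal_n(0.02, True))  # with partitioning
```

With a ratio of 0.02, these assumptions give N = 3 and variance factors that come out close to the 0.474 and 0.257 entries of Table 2.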
4.4. Minimizing the Total Test Time for a Given Test Quality
When the test quality level is satisfactory, one can take advantage of the reduced test vector duration to speed up the entire current test. To quantify the speed-up gains, we have to modify Eqs. (8) and (9). Let K be the proportion of the total number of test vectors that are actually applied, this total being equal to N × N_V, where N_V is the number of vectors in the original set of vectors (when N = 1). Equations (8) and (9) (without partitioning and with partitioning) respectively become:

$$\sigma^2_{\Delta,N} = \frac{1}{NK}\left[\sigma^2_{\Delta,vect} + \frac{2}{9}\left(\frac{T_{S,1}}{N\,10^{b}}\right)^{2/m}\right], \quad \text{w/o part.,} \qquad (10)$$

and

$$\sigma^2_{\Delta,N} = \frac{1}{NK}\left[\frac{\sigma^2_{\Delta,vect}}{NK} + \frac{2}{9}\left(\frac{T_{S,1}}{N\,10^{b}}\right)^{2/m}\right], \quad \text{with part.} \qquad (11)$$
Minimizing the total test time is equivalent to finding the lowest value of K with which the original σ²_Δ,N value (at N = 1) is maintained. Some results are presented in Table 4. Similarly to Table 2, these results were obtained with and without partitioning, for different σ²_Δ,meas/σ²_Δ,total ratio values, and by assuming a 100 nA IDDQ measurement accuracy. Note that the value of N associated with the optimum value of K may differ from the one presented in Table 2.

Table 4. Optimal values of K, with different ratio values.

Ratio   K opt., w/o part. (10)   K opt., with part. (11)
0.01    0.372                    0.314
0.02    0.475                    0.402
0.04    0.612                    0.504
0.05    0.652                    0.559
0.08    0.743                    0.642
0.10    0.804                    0.683
0.16    0.985                    0.821
0.20    1.000                    0.923
Once again, these results underline the very high relevance of the strategies proposed in this paper. For example, it means that with a σ²_Δ,meas/σ²_Δ,total ratio of 0.02, using shorter-duration test vectors along with ΔIDDQ partitioning can lead to a total current test time equal to 40.2% of the original one (without partitioning, with N = 1), without any decrease in test quality.
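The test-time side of the trade-off can be sketched the same way. The snippet below reuses the constants and the variance() helper from the previous sketch and is again only an illustration of Eqs. (10) and (11): it searches for the smallest K that keeps the N = 1 variance.

```python
import math

def k_required(n, sigma2_vect, ts_1, base, partitioning=False):
    """Smallest K keeping the Eq. (10)/(11) variance at the N = 1 level, for a given N."""
    meas = (2.0 / 9.0) * (ts_1 / (n * 10**B)) ** (2.0 / M)
    if not partitioning:                 # Eq. (10): base = (vect + meas) / (N K)
        return (sigma2_vect + meas) / (n * base)
    # Eq. (11): base = (vect/(N K) + meas) / (N K)  ->  quadratic in y = N K
    y = (meas + math.sqrt(meas**2 + 4.0 * base * sigma2_vect)) / (2.0 * base)
    return y / n

def optimal_k(ratio, partitioning=False, n_max=10):
    sigma2_meas = 2.0 * ACC_1**2 / 9.0
    sigma2_vect = sigma2_meas * (1.0 - ratio) / ratio
    ts_1 = 10**B * ACC_1**M
    base = variance(1, sigma2_vect, ts_1)
    return min(k_required(n, sigma2_vect, ts_1, base, partitioning)
               for n in range(1, n_max + 1))

print(optimal_k(0.02), optimal_k(0.02, True))   # compare with the Table 4 entries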
5. Process Drifting and Resolution Loss Effect
It is expected that normal ΔIDDQ values will on average grow with the anticipated IDDQ increase [20], and that their variance, from IC to IC, will be amplified by process drifting, defined here as variations in processing which affect the leakage current. Moreover, the same IDDQ growth will cause a resolution loss by increasing the dynamic range required to make measurements. In this section, we show how HBTP can help to reduce their impact.
5.1. Reducing Process Drifting and Resolution Loss Effect
Fig. 6 shows an example of how process drifting can affect current measurements, in the form of a folded ΔIDDQ histogram (resolution loss has a similar effect). This histogram was built using |ΔIDDQ| measurement values from one IC (Sematech data). This histogram can be compared to the (unfolded) one in Fig. 2, which is more representative of the average case and where the central peak is between −0.1 and +0.1 µA. Basically, process drifting and resolution loss dilate the x-axis, such that it becomes difficult to determine whether or not there are additional peaks caused by defects (a more detailed analysis is presented in [36]).

Fig. 6. Folded Delta IDDQ histogram.

HBTP searches for the peak with the highest corresponding average ΔIDDQ value in the ΔIDDQ histogram (folded or not) built from the ΔIDDQ measurements of a tested IC to determine whether the CUT is defective or not (by comparing this peak to a ΔIDDQ threshold). In this context, process drifting and resolution loss may cause the method to be pessimistic (rejecting good ICs). It is therefore important to determine a more suitable histogram bin size. In order to reduce the dilatation effect, we need to quantify it.

Previous results [36] showed that the standard deviation of the ΔIDDQ histogram central peak, σ_Δ, was a good indicator of this x-axis dilatation, and we developed a simplified way to estimate it. The rationale behind this simple estimation is illustrated by the histograms (distributions) in Fig. 7. Distributions are represented by triangles (for simplicity), but are assumed to be Gaussian [23]. The standard deviation for each IC is estimated by considering only the IC ΔIDDQ measurements whose absolute value is smaller than half of the highest ΔIDDQ absolute value (|Δmax/2|). For good ICs (top graph in Fig. 7), if all IC current measurements are smaller than 4σ_Δ, then |Δmax/2| = 2σ_Δ, and therefore more than 95% of the measurements are in theory used for the estimation, leading to an estimate that is slightly lower than the real value. For defective ICs with additional peaks (see bottom graph in Fig. 7), the estimation will include less than 2.3% of the measurements coming from the closest additional peak (this peak includes on average 30% of all the measurements), when this additional peak is centered at 8σ_Δ or more. The estimation remains close enough for our purposes even if there is overlapping of the (Fig. 7) peaks, since we do not need a precise estimation.

Fig. 7. Simplified distribution examples.

Once the standard deviation is estimated, we use it to normalize current measurements and build a new ΔIDDQ histogram (one per IC). This normalization is done by dividing each current measurement value by the ratio of the estimated standard deviation over the average standard deviation obtained from (assumed) good ICs, a ratio called the drifting factor. In this particular case, the average standard deviation is equal to 0.063 µA [18]. Fig. 8 shows an example of a normalized ΔIDDQ histogram (the original histogram appearing in Fig. 6). Here normalized means that the measurement values are divided by the drifting factor. It is now easier to determine whether there is a peak and what its value is. Once the highest peak in this histogram is found (using, for example, the peak search procedure described in Section 3), the (x-axis) value of this peak is multiplied by the drifting factor. In our example, the final ΔIDDQ peak value would be 1.8 µA. To compensate for the resolution loss caused by normalization, the peak value can be more precisely estimated by calculating the average peak value from the ΔIDDQ measurements that are part of the following interval: [FPE − 2σ_Δ, FPE + 2σ_Δ], where FPE is the first peak estimation (1.8 µA in the previous example).

Fig. 8. Normalized folded Delta IDDQ histogram, drifting factor = 9.1.
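A compact Python sketch of this normalization follows. It reuses the hbtp_peak function from the Section 3 sketch, takes the 0.063 µA average good-IC standard deviation quoted above as a parameter, and estimates the central-peak spread as an RMS value about zero, which is one possible reading of the |Δmax/2| rule; none of this is the published implementation.

```python
def drift_compensated_peak(delta_iddq, sigma_good=0.063, bin_size=0.4):
    """Sketch of the Section 5.1 procedure: estimate the central-peak spread,
    normalize by the drifting factor, locate the peak, then undo the scaling."""
    half_max = max(abs(d) for d in delta_iddq) / 2.0
    central = [d for d in delta_iddq if abs(d) < half_max]   # |Delta| < |Delta_max/2|
    sigma = (sum(d * d for d in central) / len(central)) ** 0.5
    factor = sigma / sigma_good                              # drifting factor
    normalized = [d / factor for d in delta_iddq]
    fpe = hbtp_peak(normalized, bin_size) * factor           # first peak estimation (µA)
    # Refinement: average the |Delta IDDQ| values falling in [FPE - 2*sigma, FPE + 2*sigma]
    window = [abs(d) for d in delta_iddq
              if fpe - 2 * sigma <= abs(d) <= fpe + 2 * sigma]
    return sum(window) / len(window) if window else fpe
```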
5.2. Impact on Test Time

Since the previous procedure requires that all IDDQ measurements from an IC be taken before deciding whether this chip is defective or not, this may increase the length of the test. Assuming that a defect requires on average half of the IDDQ test patterns to be detected, then the total current test time ratio, R_T, between a test strategy requiring all the patterns and one that stops as soon as a measurement exceeds a threshold, may be expressed as:

$$R_T = \frac{2}{1 + Y_i}, \qquad (12)$$
where Y_i is the proportion of current-tested ICs that pass the current test. Using the Sematech data as an example, Y_i = 0.89 (with a 5 µA limit), which leads to an R_T value of 1.06, meaning that using all measurements would increase the total current test time by only 6%. This impact can be even further reduced by setting maximum IDDQ and ΔIDDQ threshold values (much higher than the usual thresholds), such that the obviously defective ICs are rejected early. Let Y_s be the proportion of current-tested ICs that would be detected before the end of the test. The R_T ratio then becomes:

$$R_T = \frac{2 - Y_s}{1 + Y_i}. \qquad (13)$$
In the Sematech experiment, setting early detection thresholds for IDDQ and ΔIDDQ at 100 µA leads to a Y_s value of 6% and an R_T value of 1.03, meaning a relative increase of only 3% in the total current test time. As shown in Section 4.4, this increase can be largely compensated for while maintaining the same test quality.

6. Histogram-Based Test Procedure Insertion into a Production Test
In this section, we discuss the insertion of HBTP into a production test. This insertion will depend on the post-processing capabilities of testers. Let us consider two tester categories:
• testers with weak post-processing capabilities, and
• testers with strong post-processing capabilities.
In the first case, one might try to reduce the post-processing operations, which means using post-processing operations only when necessary. This
option forces the identification of the ranges where IDDQ and ΔIDDQ can still be applied with a great deal of confidence. Indeed, the fact that IDDQ loses its efficiency does not necessarily mean that it cannot be applied at all, but that its application should be limited to specific ranges of IDDQ values. Let us define the following terms:
• ILL (IDDQ lower limit): allows an IC to be called good if all IDDQ measurements are below this limit, and
• IHL (IDDQ higher limit): allows an IC to be called defective if at least one IDDQ measurement is above this limit.
So under these conditions, a chip is declared defective as soon as one IDDQ value is higher than IHL, and it is declared good if all IDDQ values are lower than ILL. In addition to functionality requirements, the IDDQ limits can take into account static power specifications (for example, in the case of portable devices) or reliability issues. In the past, it was possible to have a single IDDQ threshold (= ILL = IHL) to distinguish between good and bad ICs. With technology scaling, a low-confidence range is appearing and is expected to grow. So, IDDQ is considered not to have a sufficient resolution to declare, with a great deal of confidence, whether an IC is defective or not when at least one IDDQ value is higher than ILL but no IDDQ value is higher than IHL. In this specific situation, one can apply ΔIDDQ testing. Similar limits can be defined for ΔIDDQ:
• DILL (ΔIDDQ lower limit): allows an IC to be called good if all ΔIDDQ measurements are below this limit, and
• DIHL (ΔIDDQ higher limit): allows an IC to be called defective if at least one ΔIDDQ measurement is above this limit.
Similarly to IDDQ, ΔIDDQ will be considered not to have a sufficient resolution to declare, with a great deal of confidence, whether an IC is defective or not when at least one ΔIDDQ value is higher than DILL but no ΔIDDQ value is higher than DIHL. However, because of the variance reduction provided by the Delta operation, the ΔIDDQ low-confidence range is expected to be narrower than the IDDQ one.
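As an illustration only, the tiered use of these limits might look like the following Python sketch. The limit values are placeholders to be set by the statistical analysis discussed in the next paragraph, and the Δdef estimate would come from a histogram-based peak search such as the Section 3 sketch.

```python
def production_decision(iddq, delta_def_estimate, ill, ihl, dill, dihl, hbtp_threshold):
    """Tiered decision sketch: IDDQ limits first, Delta IDDQ limits next, and the
    histogram-based Delta_def estimate only for the remaining low-confidence ICs.
    All limit values are illustrative placeholders."""
    if all(i < ill for i in iddq):
        return "good"                       # high confidence from IDDQ alone
    if any(i > ihl for i in iddq):
        return "defective"
    delta = [abs(b - a) for a, b in zip(iddq, iddq[1:])]
    if all(d < dill for d in delta):
        return "good"                       # high confidence from Delta IDDQ
    if any(d > dihl for d in delta):
        return "defective"
    # Low-confidence range: fall back to the HBTP estimate vs. a single threshold.
    return "defective" if delta_def_estimate > hbtp_threshold else "good"
```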
With technology scaling, a ΔIDDQ low-confidence range is also expected to appear and to grow. The first obvious use of HBTP would be for ICs falling in this ΔIDDQ low-confidence range, and the additional variance reduction (including the process drifting effect reduction [36]) it provides should narrow the low-confidence range. Moreover, as shown in Section 4, HBTP allows the variance vs. test time trade-off to be exploited, by providing a better test quality (lower variance) for the same test time or by shortening the test time for the same test quality. Because of this last and unique HBTP feature, it should be used directly instead of ΔIDDQ on testers with post-processing capabilities that allow it.

All the previous limits (ILL, IHL, DILL, DIHL) as well as the HBTP threshold should be established through a statistical analysis based on IDDQ measurements and other test results, at least for each technology but probably for each different IC. The definition of the ΔIDDQ limits and the HBTP threshold should first be based on the distinction between normal and defective pattern-to-pattern Delta values. In [18], we proposed using (during the characterization phase) the correlation between the occurrence of a ΔIDDQ value and the test pattern. For example, it was shown that there was a strong correlation between the ΔIDDQ 0.20 µA peak in Fig. 4 and test patterns 8–9 and 22–23 (Sematech experiment), suggesting that this peak is caused by normal vector-to-vector variations. In the definition of such limits/thresholds, we must also take into account the presence of defects causing a Δdef of a given amplitude but passing all the other tests. This particular aspect is discussed in the next section.

7. Defect vs. Fault
Using post-processing techniques like ΔIDDQ limits and HBTP helps to increase accuracy in the measurement of the average additional current caused by a defect, and to determine with great confidence whether it is caused by an active (involving switching nodes) defect or not. Once we have a measurement that is accurate and reliable enough, we still need to know whether the corresponding Δdef value is tolerable. In this section, we investigate this defect vs. fault relationship. We first develop a model for Δdef distributions, based on HSpice simulations, and we compare it to Δdef distributions built using Sematech data. Then, we revisit the Sematech data experiment from a Δdef distribution perspective.
7.1. Δdef Distribution Model
An additional current can be caused by gate oxide defects, bridges or opens. These defects do not necessarily cause a (logic) fault, or they might lead to a fault that remains undetected by the other test methods due to a lack of coverage. Previous papers [37, 38] have shown that such defects may only lead to small additional delays, which might not be significant enough to cause a delay fault and to be detectable by delay testing. Moreover, a recent paper [6] suggests that highly resistive shorts, leading to such defects, are increasingly likely to occur. To explore the impact of these tendencies on Δdef distributions, some HSpice simulations were run, based on a CMOS 0.35 µm technology. Our objective is to develop a model for Δdef distributions that can replace the popular bimodal IDDQ distribution model (see Fig. 1), which has been shown not to be realistic [33, 39]. The model (distributions) will be developed based on external (outside gate) bridging faults only. However, it should remain valid since internal bridging faults (within gates) as well as gate oxide defects causing shorts between transistor terminals have a similar behavior. The model will be built in a probabilistic manner, taking into account the following statistics:
• the most commonly used types of gates, and their probability of occurrence, and
• the distribution of the resistive short values.

Fig. 9. Simulated circuit for Δdef distribution estimation.

Table 5. Fault types and their probability of occurrence (POC).

Fault type      POC
inv-inv         0.2460
inv-nand2       0.2778
inv-nand3       0.0466
inv-nor2        0.1657
inv-nor3        0.0109
nand2-nand2     0.0784
nand2-nand3     0.0263
nand2-nor2      0.0935
nand2-nor3      0.0062
nand3-nand3     0.0022
nand3-nor2      0.0157
nand3-nor3      0.0010
nor2-nor2       0.0279
nor2-nor3       0.0037
nor3-nor3       0.0001

An example of simulated circuits appears in Fig. 9. In this particular example, a bridge between the output of an inverter and a 2-input nand gate is represented, which constitutes one type of fault. The resistive short is modeled by the Rbr resistor. A total of 15 bridging fault types were considered in these simulations (see Table 5), with each fault type being the result of adding a resistive short between the outputs of two gates selected among the following set of five gate types:
inverter, 2-input nand, 3-input nand, 2-input nor, and 3-input nor. These gate types were selected because they cover about 95% of the gates used in the entire ISCAS'85 benchmark [30]. For simplicity, we assume that each single gate has the same probability of being defective. Fifteen (15) different Rbr values were used (Table 6).

Table 6. Rbr POC for D[40] and D[41].

Rbr (kΩ)   D[40]    D[41]
0.2        0.5544   0.01
0.6        0.2970   0.01
1.0        0.1069   0.01
1.4        0.0026   0.01
1.8        0.0026   0.01
3          0.0130   0.05
6          0.0113   0.1
10         0.0050   0.1
14         0.0036   0.1
18         0.0036   0.1
22         0        0.1
26         0        0.1
30         0        0.1
34         0        0.1
38         0        0.1
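The probabilistic weighting behind the simulated distributions discussed below can be sketched as follows. The fault-type and Rbr probabilities are excerpts of Tables 5 and 6, but the per-combination outcomes are illustrative placeholders only, since the actual maximum Δdef values and behaviours come from the HSpice runs and are not reproduced in the paper.

```python
# Sketch of the weighting used to build the Delta_def distributions: each
# (fault type, Rbr) combination is weighted by POC(fault type) * POC(Rbr).
fault_poc = {"inv-inv": 0.2460, "inv-nand2": 0.2778}        # excerpt of Table 5
rbr_poc_d40 = {0.2: 0.5544, 0.6: 0.2970, 3.0: 0.0130}       # excerpt of Table 6
simulated = {("inv-inv", 0.2): (2100.0, "t-s@f"),           # placeholder outcomes:
             ("inv-inv", 3.0): (400.0, "delay only"),       # (max Delta_def in µA,
             ("inv-nand2", 0.2): (1900.0, "t-s@f"),          #  behaviour)
             ("inv-nand2", 3.0): (350.0, "delay only")}

weights = {"t-s@f": [], "delay only": []}
for (fault, rbr), (delta_def, behaviour) in simulated.items():
    weights[behaviour].append((delta_def, fault_poc[fault] * rbr_poc_d40[rbr]))
# `weights` now holds (Delta_def, probability) pairs per behaviour, i.e. the raw
# material of the two curves of Fig. 10 (up to normalization and binning).
```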
Fig. 10. Δdef distributions with D[40].

We used two different Rbr value distributions:
• D[40], a non-uniform one from 0 to 20 kΩ, matching the widely used one proposed in [40], and
• D[41], a uniform distribution between 0 and 40 kΩ
from [6]. Table 6 shows the probability of occurrence (POC, assuming there is a bridging fault) associated with each of the 15 simulated resistor values to match the D[40] and D[41] distributions. For each fault type and Rbr value combination, we verify the impact of the resistor, which can be to cause only delays (of less than 10 ns, an arbitrarily chosen limit) or a temporary stuck-at fault (t-s@f, a delay longer than 10 ns being considered as a t-s@f). Interestingly, the delay limit (10 ns here) does not have a significant impact on the resulting distributions. Fig. 10 shows the two resulting Δdef distributions (one for delay only and one for t-s@f) when using the D[40] resistive short distribution. These distributions were estimated by taking into account the probability of occurrence of each fault type and Rbr value, as well as the maximum Δdef value caused by the resistive short. With this particular resistive short distribution, 96.1% of the shorts cause t-s@f while 3.9% cause delay only. The shorts causing t-s@f have corresponding maximum Δdef values from 292 to 2588 µA. The shorts causing delay only have corresponding maximum Δdef values from 142 to 1199 µA. Fig. 11 shows the two resulting Δdef distributions when using the D[41] resistive short distribution. Now, only 4.2% of the shorts cause t-s@f while 95.8% cause delay only.

Fig. 11. Δdef distributions with D[41].
Also, the range of maximum Δdef values varies from 77 to 1199 µA for shorts causing delay only, while the range for shorts causing t-s@f remains the same. These results show the significant impact of the Rbr value distribution on the resulting behavior and Δdef value. As shorts become more resistive, they cause fewer temporary stuck-at faults, meaning that we rely more on current testing to detect them. The set of simulated Δdef distributions (Fig. 10 or 11) combined with the HBTP curve in Fig. 4 suggests a trimodal Δdef distribution after current testing (HBTP) for ICs passing all the non-IDDQ tests (symbolically represented in Fig. 12):
• one for ICs without active defects (in theory, an impulse at 0),
• one for ICs with active defects causing only delays, and
• one for ICs with active defects causing temporary stuck-at faults.
Fig. 12. Symbolic trimodal Δdef distributions.
Fig. 13. Simulated vs. Sematech distributions.
To validate this trimodal model, we combined the two curves in Fig. 11 into a curve representing ICs without active defects, assuming that 95% of ICs were not affected by active defects and that no temporary stuck-at fault was detected. The resulting curve appears in Fig. 13 as the simulation curve. In the same graph, another curve is plotted, resulting from applying HBTP to the Sematech data, for ICs passing all the non-IDDQ tests and with a maximum Δdef value of 3000 µA (note that the shape of this curve is not significantly affected by the fact that some ICs with temporary stuck-at faults were detected and removed from it). Both curves have a similar shape (especially if we consider that they come from different CMOS technologies), which suggests that our model is realistic, particularly in the main (overlapping) region of interest (Δdef < 1200 µA). Besides the technology differences, the main cause of the discrepancies appearing for Δdef values greater than 1200 µA is the lack of events (simulation points and faulty ICs) in this region. The model and the simulation results provide a better understanding of the situation. Here are some observations and comments that can be made from them:
• There exists a Δdef value from which an IC should be considered faulty (that is, causing t-s@f) with a great deal of confidence. In our example (simulations), this value would be set at 1200 µA, for both considered resistive short distributions. The probability of this event (assuming there is a resistive short) is respectively equal to 46.6% and 1.5% for the D[40] and D[41] distributions.
• There also exists a Δdef value below which delays are the only effects of the resistive shorts (around 290 µA in our example). The probability of this event (always assuming there is a resistive short) is respectively equal to 0.4% and 60.5% for the D[40] and D[41] distributions.
• There are two overlapping areas. The first one, involving the no-defect and delay-only distributions, should be very limited when using HBTP (remember that it practically eliminates the 0.2 µA ΔIDDQ peak in Fig. 4). The second one, involving the t-s@f and delay-only distributions, remains significant (from 292 to 1199 µA in our example for both Rbr distributions). The probability that a Δdef value is within this range (once again assuming there is a resistive short) is respectively equal to 53.0% and 37.9% for the D[40] and D[41] distributions.
• Defects causing temporary stuck-at faults are potentially detectable by other tests (e.g., scan, delay, functional tests). The problem is that defects must be activated when test patterns cover the involved nodes and can propagate the faults to a primary output or a scan flip-flop. Therefore, some remain undetected.

These observations and comments suggest that it is possible to detect with a great deal of confidence the presence of active defects, but that for some Δdef values it is difficult to distinguish between defects causing delay only and the ones leading to temporary stuck-at faults. Except perhaps for some rare cases of redundant logic, ICs with temporary stuck-at faults should be considered as faulty. As mentioned before, the additional delays caused by resistive shorts are generally small and might not be significant enough to cause a delay fault and to be detectable by delay testing. On the other hand, such resistive shorts (with or without delay violation) might pose a reliability risk [37]. Therefore, setting a Δdef threshold remains a trade-off between yield and reliability. The resistive short distribution has a significant impact on the proportion of defects causing temporary stuck-at faults or delays, but not necessarily on the different range limits (e.g., in our example, 290 and 1200 µA for both the D[40] and D[41] distributions). This means that it should be possible to estimate these limits by simulations or other means. However, knowing the actual resistive short distribution allows the probability of occurrence of the corresponding events to be quantified and realistic Δdef distributions to be built.

7.2. Revisiting Sematech Data: The Δdef Perspective

Fig. 14 shows a histogram (distribution, 5490 ICs, Sematech experiment) of Δdef maximum values for ICs failing at least one non-IDDQ test (delay, scan and functional tests), such that we are sure they are faulty. Note that the peak around 8 mA is caused by the fact that the currents were clamped at this value during the Sematech experiment. This distribution reveals that:
• 33.8% of these faulty ICs have a Δdef maximum value greater than or equal to 8 mA,
• 67.9% of these faulty ICs have a Δdef maximum value greater than or equal to 1 mA,
• 79.5% of these faulty ICs have a Δdef maximum value greater than or equal to 20 µA.

Fig. 14. Δdef max. distribution for ICs failing at least one non-IDDQ test.

These statistics reveal that most faults act as active ones and cause a significant Δdef value. Drawing a similar histogram for ICs failing no non-IDDQ test (12964 ICs) leads to Fig. 15 and the following statistics:
• 0.2% of these ICs have a Δdef maximum value greater than or equal to 8 mA,
• 2.0% of these ICs have a Δdef maximum value greater than or equal to 1 mA,
• 6.6% of these ICs have a Δdef maximum value greater than or equal to 20 µA.

Fig. 15. Δdef max. distribution for ICs failing no non-IDDQ test.

Without Delta IDDQ (or HBTP), these ICs would have been considered as defect-free. Obviously, Delta IDDQ (or HBTP) helps to reduce test escape. Another interesting distribution (not shown) is the one obtained with ICs failing the delay test only (regardless of IDDQ, 424 ICs), where:
• 1.4% of these ICs have a Δdef maximum value greater than or equal to 8 mA,
• 4.7% of these ICs have a Δdef maximum value greater than or equal to 1 mA,
• 10.1% of these ICs have a Δdef maximum value greater than or equal to 20 µA.

These statistics show that most of the faulty ICs passing the scan test and the functional test but failing the delay test do not have a significant Δdef maximum value, and that only a few with a significant Δdef maximum value would be detected by the delay test. This particular last class of ICs is of interest because they are likely to be affected by highly resistive shorts. According to the previous statistics, only 7% of the ICs passing the scan test and the functional test and with a Δdef maximum value greater than or equal to 1 mA would have been detected by the delay test. In summary, the previous results show or confirm that:
• a large proportion of defective ICs have a significant Δdef maximum value,
• some ICs, which are very likely to be defective, are only detected by means of current testing, and
• highly resistive shorts are not likely to be detected by a delay test.

Therefore, current testing, namely Delta IDDQ and HBTP, remains the most, if not the only, efficient way of detecting resistive shorts.
8. Next Challenges for Current Testing
All the previous results presented in this paper suggest that HBTP should provide a reliable estimation of the additional amount of current caused by a defect or fault for at least one or two additional IC generations (with respect to IDDQ). In addition to the expected IDDQ growth, intra-die process variations, which are appearing, represent one of the next challenges for current testing methods in the estimation of Δdef values. The margin provided by HBTP and used in Section 4 for a shorter test time or a better test quality, as well as the ΔIDDQ partitioning concept, are key elements for tackling this challenge. Intra-die process variations will require a reduction in switching activities from one test pattern to another in order to reduce their impact, by limiting switching to small local areas. They might also require more test patterns to compensate for their effect. Fortunately, a new technique has been developed, allowing IDDQ measurements to be sped up [41] (when compared to the fastest alternative [42]). The increased occurrence of higher parasitic resistor values, suggested by [6], constitutes another challenge, leading to smaller Δdef values. This amplifies the Δdef resolution requirements and makes threshold setting more difficult. This particular aspect is part of our future work.

9. Conclusion
This paper has reported the latest results of our ongoing effort to suitably replace IDDQ testing with variance reduction post-processing techniques applied at the IC-to-IC, vector-to-vector and measurement levels. It has explained how variance reduction can help increase test quality. A histogram-based test procedure (HBTP), based on Delta IDDQ , has been presented. We have shown how HBTP could help optimize test resources by the use of test vectors with a shorter settling time, leading to a better test quality and/or a reduced total current test time. The concept of Delta IDDQ partitioning has also been presented, which helps to amplify the effect of test vectors with a shorter settling time. We have also discussed practical aspects related to testing. First, we proposed a simple and efficient way to deal with the process drifting and resolution loss effect, using HBTP. Then, we proposed a way of inserting HBTP into a production test. Finally, we investigated the relationship between defect and fault: we proposed a new distribution model to replace the bimodal
one, and we reexamined the Sematech data from a Δdef perspective. All the results confirmed the importance of current testing and the pertinence of the variance reduction strategy on which our current test approach is based. They also revealed possibilities in terms of test optimization, targeting either a better test quality or a shorter test time. Our new distribution model provides a better understanding of the defect vs. fault relationship and should greatly help in the setting of thresholds and limits. Reconsidering the Sematech data allowed the necessity of current testing to be reconfirmed.
Acknowledgments

This research has been supported in part by the Natural Science and Engineering Research Council of Canada, the Fonds pour la Formation de Chercheurs et l'Aide à la Recherche of Québec, and by the Canadian Microelectronics Corporation. The author wants to thank Dr. Phil Nigh for access to the Sematech data, and the reviewers for their comments and suggestions.
Note 1. This data comes from the work of the Test thrust at SEMATECH, Project S121. The analysis here is the work of this university, and the conclusions are our own and do not necessarily represent the views of SEMATECH or its member companies.
References 1. K. Baker, “SIA Roadmaps: Sunset Boulevard for IDDQ ,” Int. Test Conf., 1999, p. 1121. 2. D. Bhavsar, “ITC99 Panels,” IEEE Design & Test, vol. 16, no. 4, Oct.–Dec. 1999, pp. 96–99. 3. C.F. Hawkins and J.M. Soden, “Deep Submicron CMOS Current IC Testing: Is There a Future?,” IEEE Design & Test, vol. 16, no. 4, pp. 14–15, Oct.–Dec. 1999. 4. T. Williams, R. Dennard, R. Kapur, M, Mercer, and W. Maly, “Iddq Test: Sensitivity analysis of scaling,” in Int. Test Conf., Oct. 1996, pp. 786–792. 5. M.L. Bushnell and V.D. Agrawal, Essential of Electronic Testing, Norwell, MA: Kluwer Academic Publishers, 2000. 6. M. Spica, M. Tripp, and R. Roeder, “Determining Bridge Defect Resistances from Correlating Inductive Fault Analysis Predictions to Empirical Test Results,” Int. Workshop on Defect Based Testing, pp. 11–16, 2001. 7. M. Lewitt, “ASIC Testing Updated,” IEEE Spectrum, vol. 29, no. 5, pp. 26–29, May 1992. 8. P. Nigh and W. Maly, “Test Generation for Current Testing,” IEEE Design and Test, vol. 7, pp. 26–38, Feb. 1990.
9. S. Chakravarty and M. Liu, “Algorithms for Current Monitor Based Diagnosis of Bridging and Leakage Faults,” in Proc. DAC-92, Anaheim, CA, June 1992, pp. 353–356. 10. D. Burns, “Locating High Resistance Shorts in CMOS Circuits by Analyzing Supply Current Measurement Vectors,” in Proc. ISTFA-89, Los Angeles, CA, Nov. 1989. 11. R. Aitken, “Diagnosis of leakage faults with IDDQ ,” Journal of Electronic Testing: Theory and Applications, vol. 3, no. 4, pp. 367–375, 1992. 12. R. Rajsuman, “Iddq Testing for CMOS VLSI,” in Proc. of IEEE, vol. 88, no. 4, pp. 544–566, April 2000. 13. E.I. Cole Jr. et al., “Transient Power Supply Voltage Analysis for Detecting IC defects,” in Int. Test Conf., 1997, pp. 23–31. 14. M. Sachdev, V. Zieren, and P. Janssen, “Defect Detection with Transient Current Testing and its Potential for Deep Sub-Micron ICs,” in Int. Test Conf., 1998, pp. 204–213. 15. B. Krusemen, P. Janssen, and V. Zieren, “Transient Current Testing of 0.25 µm CMOS Devices,” in Int. Test Conf., 1999, pp. 47–56. 16. J.F. Plusquellic, D.M. Chiarulli, and S.P. Levitan, “Digital Integrated Circuit Testing Using Transient Signal Analysis,” in Int. Test Conf., 1996, pp. 481–490. 17. B. Vinnakota, W. Jiang, and D. Sun, “Process-Tolerant Test with the Energy Consumption Ratio,” in Int. Test Conf., 1998, pp. 1027–1036. 18. C. Thibeault, “An Histogram Based Procedure for Current Testing of Active Defects,” in Int. Test Conf., 1999, pp. 714– 723. 19. A.C. Miller, “IDDQ Testing in Deep Submicron Integrated Circuits,” in Int. Test Conf., 1999, pp. 724–729. 20. P. Maxwell et al., “Current Ratios: A Self-Scaling Implementation Current Signatures for Production IDDQ Testing,” Int. Test Conf., 1999, pp. 738–746. 21. S. Jandhyala et al., “Clustering Based Techniques for IDDQ Testing,” Int. Test Conf., 1999, pp. 730–737. 22. C. Thibeault, “Detection and Location of Faults and Defects Using Digital Signal Processing,” in IEEE VLSI Test Symp., 1995, pp. 262–267. 23. C. Thibeault, “On the Comparison of IDDQ and IDDQ Testing,” in IEEE VLSI Test Symp., Dana Point, CA, 1999, pp. 143–150. 24. Y. Okuda, “DECOUPLE: Defect Current Detection in Deep Submicron IDDQ ,” in Int. Test Conf., 2000, pp. 199–206. 25. P.N. Varyiam, “Increasing the IDDQ Test Resolution Using Current Prediction,” in Int. Test Conf., 2000, pp. 217–224. 26. W.R. Daasch, J. McNames, D. Bockelman, K. Cota, and R. Madge, “Variance Reduction Using Wafer Patterns IDDQ Data,” in Int. Test Conf., 2000, pp. 189–198. 27. A. Gattiker and W. Maly, “Current Signatures,” in IEEE VLSI Test Symp., 1999, pp. 112–117. 28. R.C. Dorf, Modern Control System, 6th edition, Reading, Massachusetts: Addison-Wesley, 1992. 29. C. Thibeault, “A Novel Probabilistic Approach for IC Diagnosis Based on Differential Quiescent Current Signatures,” in IEEE VLSI Test Symp., 1997, pp. 80–85.
30. C. Thibeault and L. Boisvert, “Diagnosis Method Based on Delta Iddq Probabilistic Signatures: Experimental Results,” in Int. Test Conf., 1998, pp. 1019–1026. 31. P. Nigh, W. Needham, K. Butler, P. Maxwell, and R. Aitken, “An Experimental Study Comparing the Relative Effectiveness of Functional, Scan, Iddq , and Delay-Fault Testing,” in IEEE VLSI Test Symp., 1997, pp. 459–463. 32. P. Nigh, W. Needham, K. Butler, P. Maxwell, R. Aitken, and W. Maly, “So What is an Optimal Test Mix? A Discussion on the Sematech Methods Experiment,” in IEEE Int. Test Conf., 1997, pp. 1037–1038. 33. P. Nigh and A. Gattiker, “Test Method Evaluation Experiments and Data,” in Int. Test Conf., 2000, pp. 454–463. 34. A.H. Bowker and G.J. Liebermann, Digital Signal Processing, 2nd edition, New Jersey: Prentice-Hall, 1972. 35. C. Crapuchettes, “Testing CMOS IDD on Large Devices,” in Int. Test Conf., 1987, pp. 310–315. 36. C. Thibeault, “Improving Delta-IDDQ -based Test Methods,” in Int. Test Conf., 2000, pp. 207–216. 37. H. Hao and E.J. McCluskey, “Resistive Shorts within CMOS Gates,” Int. Test Conf., 1991, pp. 292–301. 38. H.T. Vierhaus, W. Meyer, and U. Gl¨aser, “CMOS Bridges and Resistive Transistor Faults: IDDQ versus Delay Effects,” in Int. Test Conf., 1993, pp. 83–91. 39. J. Figueras and A. Ferr´e, “Possibilities and Limitations of IDDQ Testing in Submicron CMOS?,” IEEE Trans. on Components, Packaging and Manufacturing Technology—Part B, vol. 21, no. 4, pp. 352–359, Nov. 1998. 40. R. Rodriguez-Montanes, E.M.J.G. Bruls, and J. Figueras, “Bridging Defects Resistance Measurements in a CMOS Process,” in IEEE Int. Test Conf., 1992, pp. 892–899. 41. C. Thibeault, “VDDQ Integrated Circuit Testing System and Method,” US and International Patent Pending. 42. K.M. Wallquist, A.W. Righter, and C.F. Hawkins, “A General Purpose IDDQ Measurement Circuit,” in Int. Test Conf., 1993, pp. 642–651. 43. Hewlett Packard, “Measuring CMOS Quiescent Power Supply Current with HP 82000,” Application Note 398–3.
Claude Thibeault received the B.Eng. degree in unified engineering from Université du Québec à Chicoutimi in 1986 and the Ph.D. degree in electrical engineering from École Polytechnique de Montréal in 1991. The same year, he started his career with the Department of Mathematics and Computer Science of Université du Québec à Montréal. In 1993, he joined the Electrical Engineering Department of École de technologie supérieure, Montréal, Canada, where he now serves as a full professor. His research interests include test and diagnosis of ICs, ASIC and FPGA design for different applications including telecommunications and video, and defect tolerance. He has also served as an ASIC/FPGA design and test consultant for the Société Générale de Financement du Québec and for different companies such as Hyperchip, Domosys, CMC Electronic and Sensio.