J Comput Electron DOI 10.1007/s10825-014-0630-8
Numerical analysis method of heat transfer in an electronic component using sensitivity analysis

Otmane Souhar · Christophe Prud'homme
© Springer Science+Business Media New York 2014
Abstract Numerical models applied to real-life engineering processes have many input parameters that are not always known with a sufficient degree of certainty. Moreover, such models are characterized by their mathematical complexity and time-consuming calculations. In this context, our certified reduced basis (crb) model is developed taking into account the input uncertainties, which produce many different uncertain outputs. In order to assess more precisely the relative importance of the input parameters for the model outputs, we carry out a quantitative sensitivity analysis, a crucial tool in the development and evaluation of complex models. In the current paper, the parametrized crb method is introduced and applied successfully to a simplified 2D heat transfer model of electronic components. Furthermore, different sensitivity analysis methods are compared: specifically, we use linear sensitivity methods, the one factor at a time design, and global techniques such as the Morris one factor at a time method, Sobol's method and Monte Carlo sampling coupled with scatter plots and correlation analysis.

Keywords Heat transfer · crb method · Sensitivity analysis
O. Souhar (B)
Faculty of Sciences, LIMA Laboratory, MCS Team, University of Chouaib Doukkali, BP. 20, 24000 El Jadida, Morocco
e-mail: [email protected]

C. Prud'homme
Jean Kuntzmann Laboratory, University of Joseph Fourier, 51 rue des Mathématiques, 38041 Grenoble Cedex 9, France

1 Introduction

The use of computational modeling within engineering applications has become a popular design tool. This modeling
requires input parameters and variables intended to characterize the processes under investigation. However, input values are subject to many sources of uncertainty, including measurement errors, absence of information and poor or partial understanding of the driving forces and mechanisms. This places limits on our confidence in the response (output) of the model. Sensitivity analysis (SA) is used to study how the variation in the output of a model can be apportioned to different variations in the inputs of the process being investigated. It is an important technique for checking the performance of a given model. Different methods can be used to perform an SA; overall, these methods may be grouped into three classes: screening designs (SD), local sensitivity analysis (LSA) and global sensitivity analysis (GSA). These classes are well described by Saltelli et al. [25]. SD is the simplest way to carry out an SA. It can be used to identify, with low computational effort, the parameter subset that controls most of the output variability. It is based on the experience that often only a few parameters have a significant effect on the model outputs, e.g. the one factor at a time approach. The LSA class concentrates on the local impact of the inputs on the model outputs. LSA is usually carried out by computing or approximating the partial derivatives of the output functions with respect to the input variables [29]. Automatic differentiation (AD) has become a popular technique to compute derivatives, and thus a technique for SA and uncertainty propagation in several models. Recently, different types of AD software have been developed and are available for non-commercial use free of charge, e.g., ADIFOR [3], TAM [5], ADOL [28].
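The forward mode that underlies such AD tools can be illustrated with a minimal dual-number class. This is a sketch of the idea only, not of any of the packages cited above; the class name `Dual`, the helper `exp` and the test function are this sketch's own.

```python
import math

class Dual:
    """Forward-mode AD value a + b*eps with eps^2 = 0; `der` carries dY/dX_i."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def exp(x):
    # chain rule for the exponential
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# Local sensitivity dY/dx1 of Y = 3*x1*x2 + exp(x1) at (x1, x2) = (1, 2):
x1, x2 = Dual(1.0, 1.0), Dual(2.0, 0.0)   # seed dx1 = 1, dx2 = 0
y = 3 * x1 * x2 + exp(x1)                 # y.der holds 3*x2 + e^x1
```

Seeding one input direction at a time yields exactly the partial derivatives that LSA needs, at a small constant overhead per arithmetic operation.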
GSA is a quantitative study based on multiple model realizations; it aims to estimate the fractional contribution of each input factor to the variability of the quantity of interest (output), as well as to account for interaction terms [22,26,27,30].
Table 1 The parameter descriptions of the parametrized crb model and their ranges of variation

Name           | Description          | Nominal value | Range                    | Units
Parameters
t              | Time                 | –             | [0, 1500]                | s
Q              | Heat source          | 10^6          | [0, 10^6]                | W · m^-3
IC parameters
k1 = k2 = K_IC | Thermal conductivity | 2             | [0.2, 150]               | W · m^-1 · K^-1
r13 = r23 = r  | Thermal conductance  | 100           | [10^-1, 10^2]            | W · m^-2 · K^-1
ρC_IC          | Heat capacity        | 1.4 · 10^6    | –                        | J · m^-3 · K^-1
e_IC           | Thickness            | 2 · 10^-3     | –                        | m
h_IC = L_IC    | Height               | 2 · 10^-2     | –                        | m
h1             | Height               | 2 · 10^-2     | –                        | m
h2             | Height               | 7 · 10^-2     | –                        | m
PCB parameters
k3 = k_PCB     | Thermal conductivity | 0.2           | –                        | W · m^-1 · K^-1
ρC3            | Heat capacity        | 2 · 10^6      | –                        | J · m^-3 · K^-1
e_PCB          | Thickness            | 2 · 10^-3     | –                        | m
h_PCB          | Height               | 13 · 10^-2    | –                        | m
Air parameters
T0             | Inflow temperature   | 300           | –                        | K
D              | Inflow rate          | 7 · 10^-3     | [5 · 10^-4, 10^-2]       | m^2 · s^-1
k4             | Thermal conductivity | 3 · 10^-2     | –                        | W · m^-1 · K^-1
ρC4            | Heat capacity        | 1100          | –                        | J · m^-3 · K^-1
e_a            | Thickness            | 4 · 10^-3     | [2.5 · 10^-3, 5 · 10^-2] | m

Three techniques of this last class are widely used: response surface methodology (RSM), the Fourier amplitude sensitivity test (FAST), and the Monte Carlo method (MCM). In the current paper, we investigate the impact of the model input parameters on the outputs using the one factor at a time technique, linear sensitivity, Monte Carlo, the Morris one factor at a time and Sobol's sensitivity methods. However, the application of such methods involves multiple runs of the model, which makes it very expensive in computation time. In order to reduce the computation time, the Latin hypercube design has been used to sample the input parameter values of the model. The goal of such a design is to create a matrix of parameter values that minimizes their correlations [8,14]. The study case, presented in Sect. 4.1, is a 2D model representative of the neighborhood of an electronic component subjected to a cooling air flow, described by four geometrical domains in IR², in which two dissipative components are embedded. Engineers must then control the temperature reached by sensitive components inside the equipment as a function of the five uncertain inputs described in Table 1. The parametrized certified reduced basis (crb) model, developed to conduct the simulations and described in the next section, permits rapid and reliable approximation of an input-parametrized partial differential equation (PDE) in the limit
of finite complexity, where the problem written in weak form is: find u(μ) in a Hilbert space V such that

a(u(μ), v; μ) = f(v; μ), ∀v ∈ V,  (1)

where u is the weak solution, which depends on the input parameter μ. The aim of crb methods is a fast online phase in which, for a newly given parameter μ, the method relies on a Galerkin approach (projection on a space W_N spanned by the solutions of the parametrized PDE selected at N points in the parameter space) with rapid convergence and a priori and a posteriori error estimates [1,12,16,20,21]. The method has been applied to different problem classes: stationary, linear problems with affine dependence on the parameter on the one hand [6,18], and lately also non-linear, non-stationary systems of partial differential equations on the other hand [2,4], as well as other problems [11,23,31]. This paper is organized as follows. Section 2 gives an overview of the crb method used for the implementation of our model. Section 3 describes the framework used for the sensitivity analysis methods. Section 4 describes our application and numerical results. The conclusion is presented in the last section.
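The Latin hypercube design mentioned above can be sketched with plain NumPy: one uniform draw per stratum and per dimension, with the strata visited in an independently shuffled order per dimension. The parameter ranges are those of Table 1; the helper name `latin_hypercube` and the seed are illustrative only.

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """Draw n points stratified per dimension: exactly one point falls in
    each of the n equal-probability bins of every coordinate."""
    d = len(bounds)
    sample = np.empty((n, d))
    for j, (lo, hi) in enumerate(bounds):
        # one uniform draw inside each of the n strata, strata shuffled
        strata = (rng.permutation(n) + rng.random(n)) / n
        sample[:, j] = lo + strata * (hi - lo)
    return sample

# Ranges of the five uncertain parameters, taken from Table 1
bounds = [(0.2, 150.0),    # K_IC [W/(m K)]
          (5e-4, 1e-2),    # D    [m^2/s]
          (0.0, 1e6),      # Q    [W/m^3]
          (1e-1, 1e2),     # r    [W/(m^2 K)]
          (2.5e-3, 5e-2)]  # e_a  [m]

rng = np.random.default_rng(0)
X = latin_hypercube(1000, bounds, rng)   # 1000 x 5 design matrix
```

Each of the 1,000 model runs of Sect. 4.3 then evaluates the crb model at one row of `X`, so every marginal range is covered evenly with far fewer runs than plain random sampling would need.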
2 Certified reduced basis method

2.1 Overview

We consider a regular domain Ω ⊂ IR^d, d = 1, 2 or 3, and an associated functional space V ⊂ H¹(Ω), where H¹(Ω) = {v ∈ L²(Ω), ∇v ∈ (L²(Ω))^d} and L²(Ω) is the space of square integrable functions over Ω. The scalar product and norm associated with V are denoted (·,·)_V and ‖·‖_V = (·,·)_V^{1/2}, respectively. We also define a set of parameters D ⊂ IR^p, a particular point of which is denoted μ. Note that Ω does not depend on the parameters, in the sense that Ω is not parametrized topologically. We now introduce the bilinear and linear forms

a : V × V × D → IR, f : V → IR, ℓ : V → IR.

We shall assume that a is continuous,

a(w, v; μ) ≤ γ(μ) ‖w‖_V ‖v‖_V ≤ γ₀ ‖w‖_V ‖v‖_V, ∀μ ∈ D.

We further assume that a, f and ℓ depend affinely on the parameters; in particular, we suppose that for a finite integer Q, a can be expressed as

a(w, v; μ) = Σ_{q=1}^{Q} σ^q(μ) a^q(w, v), ∀w, v ∈ V, ∀μ ∈ D,  (2)

with σ^q : D → IR and a^q : V × V → IR, q = 1, …, Q. Similar decompositions hold for f and ℓ although, in what follows, they do not depend on μ. Note that this hypothesis on a, f and ℓ is crucial to obtain real-time predictions, and a large class of problems admits such a decomposition. Our problem in abstract form then reads as follows. For all μ ∈ D, find s(μ) ∈ IR given by

s(μ) = ℓ(u(μ)),  (3)

where u(μ) ∈ V is the solution of

a(u(μ), v; μ) = f(v), ∀v ∈ V.  (4)

In the language of the introduction, a is the partial differential equation in weak form, u(μ) is our solution and s(μ) the output. The reduced basis approximation method serves to evaluate the influence of the uncertain inputs on the model output; more specifically, the output is expressed as a functional of a field variable that is the solution of an input-parametrized partial differential equation.

2.2 Parametrized reduced basis approximation

We introduce a sampling of the parameter space, S_N = {μ¹, …, μ^N, μ^i ∈ D}. The construction of this sampling can be done in several ways. We then construct the reduced basis space W_N = span{ζ_n ≡ u(μ^n), n = 1, …, N}, where u(μ^n) ∈ V is the solution of (4) for μ = μ^n. In practice u(μ^n) is replaced by a truth finite element approximation, accurate enough that we can equate the finite element solution with the exact one [19].

The reduced basis approximation then reads as follows. For all μ ∈ D, find s_N(μ) = ℓ(u_N(μ)), where u_N(μ) ∈ W_N is the solution of

a(u_N(μ), v; μ) = f(v), ∀v ∈ W_N.  (5)

Theoretical results show that N can be taken small, in practice N = O(10).

2.3 Computational procedure

We develop hereafter the two main procedures, the offline and online computations, which exploit this dimension reduction. The crb solution u_N(μ) lies in W_N; we therefore express it as

u_N(μ) = Σ_{j=1}^{N} u_{Nj} ζ_j = (u_N(μ))^T ζ,  (6)

where u_N(μ) ∈ IR^N. Choosing the test functions v = ζ_i, i = 1, …, N, and inserting this representation into (5), we obtain the desired algebraic system for u_N(μ) ∈ IR^N:

A_N(μ) u_N(μ) = F_N,  (7)

where A_N(μ) ∈ IR^{N×N} is a symmetric positive definite matrix with entries A_{N i,j}(μ) = a(ζ_j, ζ_i; μ), 1 ≤ i, j ≤ N, and F_{N i} = f(ζ_i), i = 1, …, N. We then evaluate our output as s_N(μ) = F_N^T u_N(μ). We now invoke the affine decomposition (2) to write

A_{N i,j}(μ) = a(ζ_j, ζ_i; μ) = Σ_{q=1}^{Q} σ^q(μ) a^q(ζ_j, ζ_i),  (8)

which is equivalent to

A_N(μ) = Σ_{q=1}^{Q} σ^q(μ) A_N^q,  (9)

where the A_N^q ∈ IR^{N×N} are given by A^q_{N i,j} = a^q(ζ_j, ζ_i), 1 ≤ i, j ≤ N, 1 ≤ q ≤ Q. The offline and online decompositions are now clear:

i. In the offline stage, we compute the u(μ^n) and construct the A_N^q and F_N. This requires N (expensive) finite element solutions and O(QN²) scalar products;
ii. In the online stage, for any given μ ∈ D we assemble A_N(μ) from (9), solve (7) for u_N(μ), and finally evaluate s_N(μ) = F_N^T u_N(μ): this requires O(QN²) + O((2/3)N³) operations and O(QN²) storage.
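The online stage can be sketched in a few lines. In the sketch below the precomputed reduced matrices A_N^q and load vector F_N are replaced by synthetic symmetric positive definite stand-ins; the names `A_q`, `F_N`, `online` and the coefficient values σ^q(μ) are this illustration's own, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q = 8, 3  # reduced basis size and number of affine terms

# "Offline" output (precomputed once): parameter-independent reduced
# matrices A_N^q (made SPD-friendly here) and the reduced load vector F_N.
A_q = [M @ M.T + N * np.eye(N) for M in rng.standard_normal((Q, N, N))]
F_N = rng.standard_normal(N)

def online(sigma):
    """Online stage: assemble A_N(mu) = sum_q sigma^q(mu) A_N^q, solve the
    small system (7), and evaluate the output s_N(mu) = F_N^T u_N(mu)."""
    A_N = sum(s * A for s, A in zip(sigma, A_q))   # O(Q N^2) assembly
    u_N = np.linalg.solve(A_N, F_N)                # O(N^3), but N = O(10)
    return F_N @ u_N

s = online([1.0, 0.5, 2.0])   # coefficients sigma^q(mu) for some parameter mu
```

The cost of `online` is independent of the truth finite element dimension, which is what makes many-query sensitivity studies affordable.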
Regarding the a posteriori error estimation, rigorous output bounds for s(μ) have been developed under the form

s_N(μ) ≤ s(μ) ≤ s_N(μ) + Δ_N(μ), ∀μ ∈ D,  (10)

where Δ_N(μ) is an upper bound for |s(μ) − s_N(μ)|; s_N is indeed a lower bound of s, by the symmetry and coercivity of a together with the Galerkin orthogonality. Let u(μ) and u_N(μ) be the solutions of (4) and (5), respectively; then the error e(μ) = u(μ) − u_N(μ) ∈ V satisfies

a(e(μ), v; μ) = r(v; μ), ∀v ∈ V,  (11)

where r(·; μ) ∈ V′ is the residual (V′ being the dual space of V),

r(v; μ) = f(v; μ) − a(u_N(μ), v; μ), ∀v ∈ V.  (12)

Using the Riesz representation of r(·; μ), there exists ê(μ) ∈ V such that

(ê(μ), v)_V = r(v; μ), ∀v ∈ V.  (13)

The error estimator on the output is then

Δ_N(μ) = ‖ê(μ)‖²_V / α_LB(μ),  (14)

where α_LB(μ) is a lower bound of the coercivity constant, whose computation is a non-trivial task: it requires the efficient computation of the lowest and greatest eigenvalues of a generalized eigenproblem involving singular matrices, and strong difficulties may be encountered in solving such problems. For a more detailed description and results of the reduced basis approximation method, readers are referred, for example, to [12,16,18,21].

3 Sensitivity analysis

3.1 Linear sensitivity coefficient

3.1.1 Ordinary regression coefficient

Regression analysis provides an algebraic relationship between the model parameters X_j, 1 ≤ j ≤ p, and a model output Y. A multivariate sample of the input X is generated by some sampling strategy (of dimension m × p), and the corresponding sequence of m output values is computed using the model under investigation. Regression analysis usually involves the construction of a linear model of the form

Y_k = b_0 + Σ_{j=1}^{p} b_j X_jk + ε_k = Ŷ_k + ε_k, k = 1, …, m,  (15)

where the b_j are the ordinary regression coefficients (ORC), ε_k = Y_k − Ŷ_k is the regression residual, and Ŷ_k is the regression estimate of the model output for the k-th model run; Ŷ_k represents the best linear prediction. To determine the ORCs b_j one can use the least squares approach. When the model parameters have large variances, the X^T X matrix used to compute the b_j is ill-conditioned: the value of its determinant is nearly 0, and attempts to compute the inverse of the matrix run into numerical snags with uncertain final values. Ridge regression [13] is a variant of ordinary linear regression whose goal is to circumvent the problem of predictor collinearity. It gives up least squares (LS) as a method for estimating the parameters of the model and instead acts on the X^T X matrix, which is artificially modified so as to make its determinant appreciably different from 0. By doing so, it makes the new model parameters somewhat biased (whereas the parameters calculated by the LS method are unbiased estimators of the true parameters). The coefficient of determination is defined as

R² = Σ_{k=1}^{m} (Ŷ_k − Ȳ)² / Σ_{k=1}^{m} (Y_k − Ȳ)².  (16)

R² provides a measure of the extent to which the regression model matches the observed data; if R² is close to 1, the linear relationship is strong. However, when additional parameters are added to the regression equation, the value of R² increases even when these parameters do not significantly improve the regression equation [9]. The adjusted R² can be more appropriate [10]:

R²_adj = 1 − (1 − R²) × (m − 1)/(m − p − 1).  (17)

By penalizing R² by the number of parameters used in the regression equation, the adjusted R² is more likely to increase only if the new parameter results in an improved regression model. The ORCs are not very useful in sensitivity analysis because each coefficient is influenced by the units of X_j and does not incorporate any information on the distribution assigned to X_j [7]. Consequently, the regression models of Eq. (15) are
usually reformulated as a standardized or normalized regression; see for example the book of Saltelli et al. [25].

3.1.2 Standardized regression coefficient

Model inputs have, in general, variables and parameters with different physical units, and therefore the ordinary regression coefficients cannot be compared with each other. The standardized regression coefficients (SRC) can be used to avoid this drawback. The SRCs are obtained by standardizing the quantities in Eq. (15), which can then be written as

(Ŷ_k − Ȳ)/Ŝ = Σ_{j=1}^{p} b^s_j (X_kj − X̄_j)/Ŝ_j,  (18)

where [Ȳ, Ŝ] and [X̄_j, Ŝ_j] are the averages and standard deviations of (Y_1, …, Y_m) and (X_1j, …, X_mj), respectively, and the b^s_j are the SRCs. These regression coefficients may be computed easily from the ordinary regression coefficients b_j as

b^s_j = b_j × Ŝ_j/Ŝ.  (19)

The SRC b^s_j can be used as a relative sensitivity: it measures the effect of moving each input parameter away from its mean by a fixed fraction of its standard deviation, while the other parameters remain constant (i.e., (ΔY/Ŝ)/(ΔX_i/Ŝ_i)).

3.1.3 Normalized regression coefficient

The normalized regression coefficients (NRC) are obtained by normalizing Eq. (15) by the corresponding averages. It can then be written as

Y/Ȳ = Σ_{j=1}^{p} b^n_j X_j/X̄_j,  (20)

where the b^n_j are the NRCs. These regression coefficients may be computed easily from the ordinary regression coefficients b_j as

b^n_j = b_j × X̄_j/Ȳ.  (21)

The NRC b^n_j is a relative sensitivity which measures the relative change ΔY/Ŷ of Y due to a relative change ΔX_j/X̂_j, while the other parameters remain constant. The main drawback of the normalized regression method arises when X̄_j vanishes or is close to zero for some parameter 1 ≤ j ≤ p.

3.2 Linear correlation coefficients

A widely used measure of the strength of the linear relationship between an input parameter and the output Y is the linear correlation coefficient (LCC) ρ(X_j, Y), which can be expressed as

ρ(X_j, Y) = Σ_{k=1}^{m} (X_kj − X̄_j)(Y_k − Ȳ) / sqrt( Σ_{k=1}^{m} (X_kj − X̄_j)² · Σ_{k=1}^{m} (Y_k − Ȳ)² ),  (22)

where X̄_j = (1/m) Σ_{k=1}^{m} X_kj and Ȳ = (1/m) Σ_{k=1}^{m} Y_k. The LCC ranges between −1 and +1. Values close to either of these extremes indicate a strong influence, while small absolute values indicate little or no influence. The LCC is a relative sensitivity: it quantifies the relative change ΔY of Y in terms of its standard deviation S_Y.

3.3 Morris one factor at a time method

The one factor at a time analysis method developed by Morris [15] examines the change in the output as each input parameter is individually perturbed; it thus detects parameters with linear effects and nonlinear interactions. In the MOAT method, the elementary effect

M_i(X_1, …, X_p, Δ) = [Y(X_1, …, X_i + Δ, …, X_p) − Y(X_1, …, X_i, …, X_p)] / Δ  (23)

is calculated, where Δ is a value between 1/(q − 1) and 1 − 1/(q − 1); in practice this value is rescaled to a predetermined multiple of 1/(q − 1) over the domain of variation. The method samples values of X from the hyperspace (identified by a p-dimensional q-level grid) and finally calculates the mean (μ, assessing the overall influence of the factor on Y) and the standard deviation (σ, estimating the totality of the higher order effects, i.e., nonlinearity or interactions with other factors) of all the M_i obtained for each factor. μ and σ are calculated over different trajectories, each consisting of individual, one factor at a time experiments. The total number of model evaluations needed is then t(p + 1), where t is the number of trajectories.

3.4 Sobol's sensitivity indices

Sobol's method is a global sensitivity analysis technique based on an analysis of the variance of the model output; its theoretical basis is the well known variance decomposition theorem. Sobol sensitivity indices are ratios of partial variances to the total variance, which can be written as

S_i = V_i/V, S_ij = V_ij/V, S_ijk = V_ijk/V, …  (24)

S_i^T = (V_i + Σ_{k=1, k≠i}^{p} V_ik + ⋯ + V_12…p) / V,  (25)

where V = var(Y), V_i = var[IE(Y|X_i)], V_ij = var[IE(Y|X_i, X_j)] and so on. S_i is called the main or first-order effect of the parameter X_i; it represents the sensitivity of the output to changes in the input X_i alone. S_ij is the pure second order interaction term of X_i and X_j, and S_i^T is the total order sensitivity index of the parameter X_i, i.e. the contribution of X_i together with the interactions of X_i with the other parameters. These sensitivity indices are non-negative and do not exceed one. Sobol's method then appears clearly superior to traditional sensitivity methods, such as local methods that examine parameters one factor at a time, in cases where the assumption of linearity is invalid [24] (Table 1).
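The first-order indices S_i = V_i/V are commonly estimated with a Saltelli-type "pick-freeze" scheme. The sketch below applies one standard estimator form, V_i ≈ mean(Y_B (Y_ABi − Y_A)), to a toy additive function with a known answer; it is an illustration of the technique, not of the paper's crb model, and all names are this sketch's own.

```python
import numpy as np

def first_order_sobol(model, p, n, rng):
    """Estimate first-order Sobol indices S_i = V_i / V by pick-freeze:
    for each factor i, re-evaluate the model on A with column i taken
    ("picked") from B while the other columns stay ("frozen")."""
    A = rng.random((n, p))
    B = rng.random((n, p))
    Y_A, Y_B = model(A), model(B)
    V = np.var(np.concatenate([Y_A, Y_B]))   # total output variance
    S = np.empty(p)
    for i in range(p):
        AB = A.copy()
        AB[:, i] = B[:, i]
        S[i] = np.mean(Y_B * (model(AB) - Y_A)) / V
    return S

# Toy additive model Y = 2 X1 + X2 with X ~ U(0,1): exact S = (0.8, 0.2)
model = lambda X: 2.0 * X[:, 0] + X[:, 1]
S = first_order_sobol(model, p=2, n=100_000, rng=np.random.default_rng(0))
```

Each index costs n extra model runs on top of the 2n base runs, which is exactly why a cheap surrogate such as the crb model is attractive for variance-based analysis.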
4 Numerical experiment: cooling of electronic components, a simplified 2D heat transfer model

4.1 Problem description

The benchmark used in the following, Fig. 1, is proposed by eads iw [17], where a detailed description of the application is given. The temperature T of the domain Ω = ∪_{i=1}^{4} Ω_i is the solution of the heat transfer equation

ρC_i (∂T/∂t + v · ∇T) − ∇ · (k_i ∇T) = Q_i, i = 1, …, 4,  (26)

where t is the time, ρC_i is the volumic thermal capacity, k_i is the thermal conductivity and Q_i is the volumic heat dissipated. The integrated circuits (ICs) (domains Ω1 and Ω2) are soldered on the PCB at x1 = (e_PCB, h1) and x2 = (e_PCB, h2), respectively. They are considered as rectangles of width e_IC and height h_IC. The printed circuit board (PCB) is a rectangle Ω3 of width e_PCB and height h_PCB. The air flows along the PCB in the domain Ω4.

Fig. 1 Geometry of the integrated circuits: Ω = ∪_{i=1}^{4} Ω_i with ∂Ω = ∪_{i=1}^{4} Γ_i

The speed in the air channel Ω4 is supposed to have a parabolic profile as a function of the x coordinate. Its expression is simplified as follows (notice that v is normally a solution of the Navier-Stokes equations; the resulting temperature and velocity fields may be quite different from those of this simplified model). For all 0 ≤ y ≤ h_PCB,

v = (3D / (2(e_a − e_IC))) [1 − ((x − ((e_a + e_IC)/2 + e_PCB)) / ((e_a − e_IC)/2))²] f(t) ŷ, e_PCB + e_IC ≤ x ≤ e_PCB + e_a,
v = 0, e_PCB ≤ x ≤ e_PCB + e_IC,  (27)

where f is a function of time modeling the start-up of the PCB ventilation, i.e.

f(t) = 1 − exp(−t/3),  (28)

D is the air flow rate, and ŷ = (0, 1)^T is the unit vector along the y axis. A quick verification shows that

∫_{Γ4∩Ω4} v · n = ∫_{Γ4∩Ω4} v_y = D f(t).  (29)

The medium velocity v_i = 0, i = 1, 2, 3, in the solid domains Ω_i. The ICs dissipate heat; we have, respectively,

Q1 = Q (1 − exp(−t)) in Ω1,
Q2 = Q (1 − exp(−t)) in Ω2,  (30)

where Q is the heat source. We denote by n|_{Ω_i} = n_i the unit outward normal to Ω_i and by n|_{Ω_j} = n_j the unit outward normal to Ω_j.

Boundary conditions

– On Γ3, a zero flux (Neumann) condition:

−k3 ∇T · n3 = −k4 ∇T · n4 = 0;  (31)
Fig. 2 Scatter plots from the Monte Carlo simulation and illustration of the linear correlation coefficients: a ρ(K_IC, S1) = −0.05; b ρ(D, S1) = −0.28; c ρ(Q, S1) = 0.78; d ρ(r, S1) = −0.03; e ρ(e_a, S1) = 0.45
– On Γ4 (0 ≤ x ≤ e_PCB + e_a, y = 0), the temperature is set (Dirichlet condition):

T = T0;  (32)

– Between Γ1 and Γ2, periodic conditions:

T|_{x=0} = T|_{x=e_a+e_PCB},
−k3 ∇T · n3|_{x=0} = k4 ∇T · n4|_{x=e_a+e_PCB};  (33)

– At the interfaces between the ICs and the PCB, there is a thermal contact conductance:

−k1 ∇T · n1 = −k3 ∇T · n3 = r13 (T|_{∂Ω1} − T|_{∂Ω3}),
−k2 ∇T · n2 = −k3 ∇T · n3 = r23 (T|_{∂Ω2} − T|_{∂Ω3});  (34)

– On the other internal boundaries Γ_ij = Ω̄_i ∩ Ω̄_j ≠ ∅, the continuity of the heat flux and of the temperature:

T_i = T_j,
k_i ∇T · n_i = −k_j ∇T · n_j.  (35)

Initial condition. At t = 0 s, we set T = T0.
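The simplified velocity profile of Eq. (27) can be checked numerically: integrating it across the channel must recover the imposed flow rate D f(t). The sketch below uses the Table 1 nominal values; the function names are illustrative only.

```python
import numpy as np

# Table 1 nominal values (SI units)
e_PCB, e_IC, e_a, D = 2e-3, 2e-3, 4e-3, 7e-3

def f(t):
    return 1.0 - np.exp(-t / 3.0)          # ventilation start-up, Eq. (28)

def v_y(x, t):
    """Vertical velocity of Eq. (27): parabolic in the channel, zero over the IC."""
    center = (e_a + e_IC) / 2.0 + e_PCB    # channel midline
    half = (e_a - e_IC) / 2.0              # channel half-width
    prof = 3.0 * D / (2.0 * (e_a - e_IC)) * (1.0 - ((x - center) / half) ** 2)
    inside = (x >= e_PCB + e_IC) & (x <= e_PCB + e_a)
    return np.where(inside, prof * f(t), 0.0)

# Trapezoidal check of Eq. (29): flux through the channel equals D * f(t)
x = np.linspace(e_PCB + e_IC, e_PCB + e_a, 20001)
vy = v_y(x, t=10.0)
flux = float(np.sum(0.5 * (vy[1:] + vy[:-1]) * np.diff(x)))
```

The 3/(2w) prefactor is exactly what normalizes the parabola so that its integral over the channel width w = e_a − e_IC is one.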
4.2 Uncertainties in input data and output

Five uncertain parameters are considered herein: the thermal conductivity K_IC, the heat source Q, the inflow rate D, the thermal conductance r and the thickness e_a. Table 1 summarizes the definition, the nominal value, the uncertainty range and the unit of each parameter. Two outputs are considered: the mean temperature S1(μ) of the hottest IC,

S1(μ) = (1/(e_IC h_IC)) ∫_{Ω2} T,  (36)

and the mean temperature S2(μ) of the air at the IC outlet,

S2(μ) = (1/e_a) ∫_{Ω2∩Γ3} T.  (37)

4.3 Sensitivity analysis results

Simulations are performed with the parametrized crb model. The scenario chosen for this study is a simplified 2D heat transfer problem in an electronic component. First, a Monte Carlo analysis is performed with 1,000 model runs, where the Latin hypercube design is used to sample the input parameter values. Thus,
Fig. 3 Scatter plots from the Monte Carlo simulation and illustration of the linear correlation coefficients: a ρ(K_IC, S2) = −0.06; b ρ(D, S2) = −0.55; c ρ(Q, S2) = 0.54; d ρ(r, S2) = 0.05; e ρ(e_a, S2) = 0.11
scatter plots, Figs. 2 and 3, can be used to investigate the behaviour of the model. These figures reveal linear, nonlinear or other unexpected relationships between the input parameters and the outputs. Specifically, the scatter plots (Fig. 2) clearly show the dominant effect of Q on S1 compared with that of e_a, and the ordering of the input factors by their influence on S1 is

Q > e_a > D > r > K_IC.  (38)

Such a conclusion is drawn from Fig. 2, as there is a clearer pattern in the plot for Q than for e_a, and so on. Correlation provides a measure of the strength of the linear relationship between an input X_j and the outcome Y, a positive value indicating that X_j and Y tend to increase and decrease together and a negative value indicating that X_j and Y tend to move in opposite directions. The LCCs associated with the scatter plots of Figs. 2 and 3 are very informative: there is an absence of a linear association between both outputs (S1 and S2) and the thermal conductivity K_IC and the thermal conductance r. Furthermore, S1 is more sensitive to Q than to e_a, and more to e_a than to D; moreover, Q also shows a strong linear behaviour, as demonstrated by ρ(Q, S1) = 0.78. Results for the second output S2 are given in Fig. 3; we note that the thickness e_a does not have the same effect as for S1, while the LCCs related to Q and D are equivalent, with a linear behaviour.

As noted in the sensitivity analysis section, it is hard to assess variable importance from the ordinary regression coefficients because of the effect of units and distribution assumptions; numerical results for the ORCs are therefore not presented. The regression results obtained from the three linear sensitivity methods, given in Table 2, are very similar. In particular, Ridge regression, SRC and NRC identify Q as the most important parameter; they also indicate an effect of e_a and D; none of the methods indicates an important effect of K_IC and r. Further, the regression model is usually associated with the adjusted coefficient of determination R²; it is of the order of 62 % and 66 % for the S1 and S2 outputs, respectively. This indicates that the regression model is not very successful in accounting for the uncertainty in the outputs.

A box plot is usually preferable to means and standard deviations because of the large amount of uncertainty information that is lost in the calculation of means and standard deviations. Box plots are particularly useful when it is desired to display and compare the uncertainty in a number of related parameters. From the OAT technique (Figs. 4c, 5c, 6), we observe the strong linear dependency between the heat source Q and the outcomes S1 and S2. The thickness e_a controls S1 much more than the inflow rate D, which in turn matters more than the heat source Q, r and K_IC. The second output S2 depends much more on D than on Q and e_a, while there is no visible effect on S2 of K_IC and r.

Fig. 4 One factor at a time sensitivity analysis of S1 resulting from changes in one parameter at a time: a thermal conductivity, b inflow rate, c heat source, d thermal conductance, e thickness

Figure 7 shows the mean and the standard deviation of the effects of each parameter from the MOAT analysis (red plots correspond to the S1 output and blue plots to the S2 output). Parameters with a low significance in terms of output variance appear at the bottom left of the plot. Those in the bottom right segment have a high linear effect on the model output, and those in the upper portion show strong nonlinear or interactive effects. The MOAT analysis results are closely related to those obtained before. Specifically, the results regarding S1 show that the heat source Q appears at the bottom right of the figure, showing a strong linear effect. The inflow rate D shows a strong mean effect and a high variance, indicating that the sensitivity to this parameter strongly depends on the values of the others. The thickness e_a appears to have a strong linear effect but with an important variance. On the other side, the results regarding S2 show that D and Q have the same effect as for S1 and, contrary to S1, e_a affects S2 only slightly. The remaining parameters K_IC and r have a very small effect on both outputs.

One drawback of the Morris design is that it only gives an overall measure of the interactions, indicating whether interactions exist, but it does not reveal which are the most significant; Sobol's method is then used. However, the determination of V_ijk, V_ijkl, …, V_12…p with the Sobol approach is computationally very demanding and typically is not done; thus, only the second order indices were calculated in this work. We present below the results illustrated in Tables 2 and 3: the parameter that has the most influence on the variance of the S1 output (in the sense of the total index) is the inflow rate D, with a total index of 0.43 and about 20 % of the variance of S1 explained alone. e_a appears to have an important effect on S1, with a total index of 0.32. Like the parameter D, e_a contributes about 20 % of the variance of S1. The parameter Q only occurs alone, since its first order index is equivalent to its total index, explaining about 16 % of the variance of S1. The results regarding the output S2 are very similar. As shown in the MOAT analysis, the two parameters K_IC and r have no important influence on the S1 output and an insignificant effect on S2.
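The MOAT statistics (μ, σ) discussed above can be reproduced on a toy function with a short trajectory loop costing t(p + 1) model runs, as in Sect. 3.3. The model and all names below are illustrative, not the paper's benchmark.

```python
import numpy as np

def morris(model, p, t, delta, rng):
    """Mean and standard deviation of Morris elementary effects (Eq. 23),
    from t one-factor-at-a-time trajectories of p + 1 runs each."""
    effects = np.empty((t, p))
    for tr in range(t):
        x = rng.random(p) * (1.0 - delta)   # keep x + delta inside [0, 1]
        y0 = model(x)
        for i in rng.permutation(p):        # perturb factors in random order
            x_new = x.copy()
            x_new[i] = x[i] + delta
            y1 = model(x_new)
            effects[tr, i] = (y1 - y0) / delta
            x, y0 = x_new, y1               # advance along the trajectory
    return effects.mean(axis=0), effects.std(axis=0)

# Toy model: linear in x1, strongly nonlinear in x2, inert x3
model = lambda x: 4.0 * x[0] + 10.0 * x[1] ** 3
mu, sigma = morris(model, p=3, t=50, delta=0.25, rng=np.random.default_rng(0))
```

On this toy model the linear factor gets a constant elementary effect (σ ≈ 0, bottom right of a Morris plot), the cubic factor a large σ (upper region), and the inert factor lands at the origin, mirroring the reading of Fig. 7.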
Fig. 5 One factor at a time sensitivity analysis of S2 resulting from changes in one parameter at a time: a thermal conductivity, b inflow rate, c heat source, d thermal conductance, e thickness

Fig. 6 Boxplot of S1 (a) and S2 (b) using one factor at a time; outputs 1: K_IC, 2: D, 3: Q, 4: r, 5: e_a
Fig. 7 Morris one factor at a time analysis for S1 (red diamonds) and S2 (blue squares): 1: K_IC, 2: D, 3: Q, 4: r, 5: e_a

Table 2 Ridge, standardized and normalized regression coefficients

Input | Ridge (S1) | Ridge (S2) | Standardized (S1) | Standardized (S2) | Normalized (S1) | Normalized (S2)
K_IC  | 0.1282     | 0.0805     | 0.0093            | 0.0061            | 0.0021          | 0.0005
D     | −12.3814   | −8.2187    | −0.1989           | −0.2133           | −0.0487         | −0.0203
Q     | 35.4837    | 8.3726     | 0.3502            | 0.1602            | 0.3701          | 0.0301
r     | −3.6607    | 1.0719     | −0.0839           | 0.0012            | −0.0186         | 0.0001
e_a   | 18.7737    | 0.7228     | 0.2805            | 0.0906            | 0.0805          | 0.0076

Table 3 Evaluation of the Sobol sensitivity indices S_i and S_i^T for the S1 and S2 outputs

Output | S_KIC | S_D  | S_Q  | S_r  | S_ea | S^T_KIC | S^T_D | S^T_Q | S^T_r | S^T_ea
S1     | 0.01  | 0.21 | 0.16 | 0.03 | 0.19 | 0.06    | 0.43  | 0.18  | 0.08  | 0.32
S2     | ≈ 0   | 0.23 | 0.12 | 0.03 | 0.15 | ≈ 0     | 0.49  | 0.15  | 0.06  | 0.27

5 Conclusion

The crb model is applied successfully in this work to a simplified 2D heat transfer equation in electronic components. A number of approaches to sensitivity analysis were carried out; the results obtained were similar and complementary. The linear sensitivity, one factor at a time design, MOAT and Sobol's method analyses have been applied successfully to identify the critical input parameters making the major contribution to the uncertainties in the predictions, and hence the research priorities for improving risk assessments with the model. The sensitivity analysis demonstrates:

– the weak contribution of the thermal conductivity K_IC and the thermal conductance r to both outputs;
– the strong linear contribution of the heat source Q to both outputs;
– taking interactions with other parameters into account, the inflow rate D makes the major contribution to the uncertainties in the predictions; and
– the thickness e_a is the second major contributor for both outputs.

Acknowledgments This work was supported by the opus Project.

References
1. Barrault, M., Maday, Y., Nguyen, N.C., Patera, A.: An empirical interpolation method: application to efficient reduced-basis discretization of partial differential equations. C. R. Acad. Sci. Paris. 339, 667–672 (2004) 2. Carlberg, K., Bou-Mosleh, C., Farhat, C.: Efficient non-linear model reduction via a least-squares Petrov-Galerkin projection and compressive tensor approximations. Int. J. Numer. Meth. Eng. 86(2), 155–181 (2011) 3. Carle, A., Fagan, M.: ADIFOR Overview. Rice University report CAAM-TR-00-02 (2000) 4. Chaturantabut, S., Sorensen, D.C.: Application of POD and DEIM to dimension reduction of nonlinear miscible viscous fingering in porous media. Math. Comput. Model. Dyn. Syst. 17(4), 337–353 (2011) 5. Giering, R.: Tangent linear and Adjoint model compiler, Users manual. (2000). http://www.autodiff.com/tamc 6. Haasdonk, B., Ohlberger, M.: Reduced basis method for finite volume approximations of parameterized linear evolution equations. M2AN, Math. Model. Numer. Anal. 42(2), 277–302 (2008) 7. Helton, J.C., Johnson, J.D., Sallaberry, C.J., Storlie, C.B.: Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliab. Eng. Syst. Safe. 91, 1175–1209 (2006) 8. Iman, R.L.: Uncertainty and sensitivity analysis for computer modeling applications. Reliab. Technol. 28, 153–168 (1992) 9. Janssen, P.H.M.: Assessing sensitivities and uncertainties in models: a critical evaluation. In: Grasman, J., Van Straten, G. (eds.), Predictability and Nonlinear Modelling in Natural Sciences and Economics, pp. 344–361, Kluwer Academic Publishers, Dordrecht, (Proceedings of the 75th Anniversary Conference of WAU, April 5–7, Wageningen) (1993) 10. Janssen, P.H.M., Heuberger, P.S.C., Sanders, R.: A Software Package for Sensitivity and Uncertainty Analysis. National Institute of Public Health and Environmental Protection, Bilthoven (1992) 11. 
Løvgren, A.E., Maday, Y., Rønquist, E.M.: A reduced basis element method for the steady Stokes problem: application to hierarchical flow systems. Modeling Identif. Control 27(2), 79–94 (2006) 12. Maday, Y., Patera, A., Turinici, G.: Global a priori convergence theory for reduced-basis approximations of single-parameter symmetric coercive elliptic partial differential equations. C. R. Acad. Sci. Paris. 335, 289–294 (2002) 13. Marquardt, D.W., Snee, R.D.: Ridge regression in practice. Am. Stat. 29(1), 3–20 (1975) 14. McKay, M.D., Beckman, R.J., Conover, W.J.: A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21(2), 239–245 (1979) 15. Morris, M.D.: Factorial sampling plans for preliminary computational experiments. Technometrics 33(2), 161–174 (1991)
16. Nguyen, N.C., Rozza, G., Patera, A.: Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers' equation. Calcolo 46, 157–185 (2009) 17. OPUS: Spécifications du projet OPUS. Agence Nationale de la Recherche, D-WP1/08/04/A (2009) 18. Patera, A., Rozza, G.: Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations. Arch. Comput. Methods Eng. 15, 229–275 (2008) 19. Prud'homme, C.: Feel++: a modern and unified c++ implementation of finite-element and spectral-elements methods in 1d, 2d and 3d: overview and applications. In: International Congress on Industrial and Applied Mathematics, 16–20 July 2007. http://www.feelpp.org 20. Prud'homme, C., Rovas, D., Veroy, K., Patera, A.: Mathematical and computational framework for reliable realtime solution of parametrized partial differential equations. ESAIM. Math. Model. Numer. Anal. 36(5), 747–771 (2010) 21. Prud'homme, C., Rovas, D., Veroy, K., Maday, Y., Patera, A., Turinici, G.: Reliable real-time solution of parametrized partial differential equations: reduced-basis output bounds methods. J. Fluids Eng. 124, 70–80 (2002) 22. Ratto, M., Tarantola, S., Saltelli, A.: Sensitivity analysis in model calibration: GSA-GLUE approach. Comput. Phys. Commun. 136(3), 212–224 (2001) 23. Rozza, G.: Shape design by optimal flow control and reduced basis techniques: Applications to bypass configuration in haemodynamics. Ph.D. Thesis, Ecole Polytechnique Fédérale de Lausanne (2008)
24. Saltelli, A., Annoni, P.: How to avoid a perfunctory sensitivity analysis. Environ. Modell. Softw. 25, 1508–1517 (2010) 25. Saltelli, A., Chan, K., Scott, E.M.: Sensitivity Analysis. Wiley Series in Probability and Statistics. Wiley, New York (2000) 26. Saltelli, A., Tarantola, S., Campolongo, F.: Sensitivity analysis as an ingredient of modeling. Stat. Sci. 15, 377–395 (2000) 27. Saltelli, A., Tarantola, S., Chan, K.P.S.: A quantitative model-independent method for global analysis of model-output. Technometrics. 41, 39–49 (1999) 28. Shiriaev, D., Griewank, A., Utke, J.: A user guide to ADOL-F: automatic differentiation of Fortran codes. Technical report, Technical University of Dresden, Department of Mathematics (1996) 29. Souhar, O., Faure, J.B.: Automatic differentiation strategy for the local sensitivity analysis of a one-dimensional hydraulic model. Int. J. Numer. Meth. Fluids 66(2), 253–268 (2011) 30. Soutter, M., Musy, A.: Global sensitivity analysis of three pesticide leaching models using a Monte Carlo approach. Trans. ASAE. 43(4), 883–895 (1999) 31. Veroy, K., Prud'homme, C., Patera, A.: Reduced-basis approximations of the viscous Burgers equations: rigorous a posteriori error bounds. C. R. Acad. Sci. Paris. 337, 619–624 (2003)