ENBIS-14 in Linz

21 – 25 September 2014; Johannes Kepler University, Linz, Austria
Abstract submission: 23 January – 22 June 2014

My abstracts


The following abstracts have been accepted for this event:

  • A Model for ΔT Approximation in Power Semiconductor Devices with Different DMOS Areas based on Energy Ramp Up Tests

    Authors: Olivia Bluder (KAI - Kompetenzzentrum für Automobil- und Industrieelektronik GmbH), Christoph Schreiber (Infineon Technologies Austria), Michael Ebner (Infineon Technologies Austria), Michael Glavanovics (KAI - Kompetenzzentrum für Automobil- und Industrieelektronik GmbH)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Mining
    Keywords: Semiconductors, Reliability, Modelling, SOA definition, ΔT approximation
    Submitted at 5-May-2014 13:23 by Olivia Bluder
    Accepted
    22-Sep-2014 12:35 A Model for ΔT Approximation in Power Semiconductor Devices with Different DMOS Areas based on Energy Ramp Up Tests
    In semiconductor lifetime testing, temperature is one of the driving forces leading to device failure. In the literature, various lifetime models exist; most of them depend either on the temperature rise (ΔT) or the peak temperature (Tpeak), e.g. the Arrhenius and Coffin-Manson models [1]. This implies that knowledge of the temperature in the device is essential for deriving a reliable model. When measuring the exact temperature on a device during operation is not possible, electro-thermal FEM simulations are commonly carried out instead [2]. These FEM simulations are time consuming. To reduce this effort, we investigated mathematical approaches to approximate the temperature via physics-based relationships.
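
    For orientation, textbook forms of the two model families cited above are (shown only as an illustration; they are not the ΔT approximation developed in this work):

        N_f = C \, (\Delta T)^{-n}   % Coffin-Manson: cycles to failure vs. temperature swing
        AF  = \exp\!\left[ \frac{E_a}{k_B} \left( \frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{stress}}} \right) \right]   % Arrhenius acceleration factor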

    The data set for this investigation contains results from energy ramp-up (ERU) tests [3] of one power semiconductor device type. ERU tests are carried out at constant ambient temperature, pulse width and voltage. Via pre-defined current steps, the energy is increased until the device fails. Results are available for 4 different DMOS areas, tested at 3 different ambient temperatures and 5 different pulse widths.

    Previous publications [3] show that ΔT of a device can be approximated from these measurement data. For this purpose, the non-linear relationship given by Glavanovics and Zitta [4], which links ΔT to the maximum power, the pulse width and a thermal constant, is used. The thermal constant is a device-specific value that accounts for structural and material properties. Commonly, it is assumed that for identical device types with different DMOS areas the constant scales with the DMOS area, so that the model remains applicable when the area changes. The given data show that such a simple scaling is not sufficient, because not only the thermal constant but also the exponent of the pulse width shows a non-linear dependency on the area. Based on these observations and on physical assumptions, we extended the current ΔT model to include the DMOS area. With the added parameters we are able to model ΔT for different DMOS areas of one device type and varying pulse lengths.
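
    A rough sketch of the type of relationship described above (the exact functional form is the one given in [4] and is not reproduced here; c_th denotes the thermal constant and m the pulse-width exponent):

        \Delta T \;\approx\; c_{\mathrm{th}} \, P_{\mathrm{max}} \, t_{p}^{\,m}

    In the extension discussed in the talk, both c_th and m are additionally allowed to depend non-linearly on the DMOS area.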

    [1] L.A. Escobar and W.Q. Meeker, “A review of accelerated test models”, Statistical Science, 21, pp. 552-577, 2006.
    [2] M. Bernadoni, “Thermal and electro-thermal modeling of electronic devices and systems for high-power and high-frequency applications”, PhD thesis, Università degli Studi di Parma, 2012.
    [3] A. Waukmann and M. Glavanovics, “Energy-Ramp-Up Test Method for SOA Definition of Smart Power Switches with Application Relevant Stress Pulses”, in Austrochip, Villach, 2010.
    [4] M. Glavanovics and H. Zitta, “Thermal Destruction Testing: an Indirect Approach to a Simple Dynamic Thermal Model of Smart Power Switches”, in ESSCIRC, Villach, 2001.
  • Optimal Designs Subject to Cost Constraints in Simultaneous Equations Models

    Authors: Jesus Lopez-Fidalgo (University of Castilla-La Mancha), Victor Casero Alonso (University of Castilla-La Mancha)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Business
    Keywords: Approximate design, Exact design, L-optimal design, Cost constraints, Multiplicative algorithm, Simultaneous equations, Structural equations
    Submitted at 8-May-2014 13:12 by Jesus Lopez-Fidalgo
    Accepted
    22-Sep-2014 11:55 Optimal Designs Subject to Cost Constraints in Simultaneous Equations Models
    A procedure for computing optimal experimental designs subject to cost constraints in models with simultaneous equations is presented. A convex criterion function, based on a standard criterion function and an appropriate cost function, is considered. In particular, a specific L-optimal design problem is considered in an example from the literature related to promotion campaigns in a network of gas stations. Instead of using integer linear programming to obtain exact designs, we use a multiplicative algorithm to find optimal approximate designs subject to a cost constraint function. One of the advantages of the method introduced in this paper is that it reduces the computational effort needed to compute optimal designs. This is based on a covariance matrix formulation that simplifies the calculations.
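
    As an illustration of the multiplicative update for approximate designs (a minimal sketch under simplified assumptions, not the authors' cost-constrained implementation; the candidate points, regression model and L matrix below are made up):

        import numpy as np

        # Multiplicative algorithm for an L-optimal approximate design,
        # minimising Phi(w) = tr(L M(w)^{-1}) over the design weights w.
        # The cost-constrained criterion of the talk would add a cost term
        # to Phi and change the sensitivity function accordingly.
        xs = np.linspace(-1.0, 1.0, 21)                      # candidate design points
        F = np.column_stack([np.ones_like(xs), xs, xs ** 2]) # rows are f(x_i)^T (toy quadratic model)
        L = np.eye(F.shape[1])                               # L = I gives A-optimality as a special case

        w = np.full(len(xs), 1.0 / len(xs))                  # start from the uniform design
        lam = 0.5                                            # classical step power for L-/A-optimality
        for _ in range(5000):
            M = F.T @ (w[:, None] * F)                       # information matrix M(w)
            A = np.linalg.inv(M) @ L @ np.linalg.inv(M)      # M^{-1} L M^{-1}
            psi = np.einsum("ij,jk,ik->i", F, A, F)          # sensitivities f_i^T A f_i
            # general equivalence theorem: w is L-optimal when max(psi) = tr(L M^{-1})
            if psi.max() <= np.trace(np.linalg.inv(M) @ L) * (1 + 1e-6):
                break
            w = w * psi ** lam / np.dot(w, psi ** lam)       # multiplicative update

        print(np.round(w, 3))                                # mass concentrates on a few support points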
  • Official Statistics Data Integration to Enhance Information Quality

    Authors: Luciana Dalla Valle (Plymouth University), Ron Kenett (KPA Ltd., University of Turin and NYU Poly)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Economics
    Keywords: Information quality (InfoQ), Data integration, Vines, Copulas, Bayesian networks, Administrative data, Official statistics
    Submitted at 15-May-2014 19:37 by Luciana Dalla Valle
    22-Sep-2014 12:15 Official Statistics Data Integration to Enhance Information Quality
    Information quality, or InfoQ, is defined by Kenett and Shmueli (2014) as “the potential of a data set to achieve a specific (scientific or practical) goal by using a given empirical analysis method”. This concept is broader and more articulated than data quality and analysis quality. InfoQ is based on the identification of four interacting components: the analysis goal, the data, the data analysis, and the utility; and it is assessed through eight dimensions: 1) data resolution, 2) data structure, 3) data integration, 4) temporal relevance, 5) generalizability, 6) chronology of data and goal, 7) operationalization and 8) communication.
    This paper illustrates, with different case studies, a novel strategy to increase InfoQ based on the integration of official statistics data using copulas and Bayesian networks. The ability to conduct such an integration is becoming a key requirement in combining official statistics, such as census or survey-based data, with administrative data that is routinely aggregated in operational systems.
    Official statistics are extraordinary sources of information about many aspects of citizens’ lives, such as health, education, public and private services, as well as the economic climate, the financial situation and the environment. However, many studies fail to consider the importance of these fundamental sources of information, leading to a poor level of InfoQ and, in turn, to low-value statistical analyses and poorly informative results.
    The use of copulas and Bayesian networks allows us to calibrate official statistics and organizational or administrative data, strengthening the quality of the information derived from a survey and enhancing InfoQ. It also provides the ability to conduct studies with dynamic updates using structured and unstructured data, thus enhancing several of the InfoQ dimensions.
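
    As a small illustration of the copula step only (a minimal, self-contained sketch with simulated data; the Bayesian network part and the actual official-statistics sources are not reproduced here):

        import numpy as np
        from scipy import stats

        # Couple a survey variable with an administrative variable through a
        # Gaussian copula: the dependence structure is estimated separately
        # from the two marginal distributions. Both series are simulated.
        rng = np.random.default_rng(0)
        survey = rng.gamma(shape=2.0, scale=1.5, size=500)          # hypothetical survey measurement
        admin = 0.6 * survey + rng.normal(scale=0.8, size=500)      # hypothetical administrative record

        # 1) margins -> uniform pseudo-observations via ranks
        u = stats.rankdata(survey) / (len(survey) + 1)
        v = stats.rankdata(admin) / (len(admin) + 1)

        # 2) uniforms -> normal scores, then estimate the copula correlation
        z = stats.norm.ppf(np.column_stack([u, v]))
        rho = np.corrcoef(z, rowvar=False)[0, 1]
        print(f"estimated Gaussian-copula correlation: {rho:.2f}")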
  • Megavariate and Multiscale Monitoring of the Process NETworked Structure: M2NET

    Authors: Tiago Rato (University of Coimbra), Marco Reis (University of Coimbra)
    Primary area of focus / application: Process
    Keywords: Process monitoring of the correlation structure, Multiscale dynamical processes, Partial correlations, Sensitivity enhancing data transformations, Causal network, Wavelet transform
    Submitted at 18-May-2014 19:56 by Tiago Rato
    Accepted
    23-Sep-2014 16:40 Megavariate and Multiscale Monitoring of the Process NETworked Structure: M2NET
    Current industrial processes encompass several underlying phenomena spanning different regions of time and frequency. This is a consequence of these systems’ multiscale nature, which is also reflected in the patterns exhibited by the data they generate. Classical megavariate monitoring methods are not effective for this type of process, as they were designed to detect changes occurring at a single, rather specific scale. Moreover, as they are mostly dedicated to detecting deviations in the process mean [1], there is a significant lack of proposals for specifically monitoring the correlation structure of multiscale systems.
    One approach for handling such systems was proposed by Bakshi (1998) [2]. The basic idea is to decompose the original observations into a set of scale-dependent wavelet coefficients, which are then monitored simultaneously through parallel control charts. From the monitoring of the wavelet coefficients, the relevant scales can be selected and used to reconstruct the signal in the original domain, in what effectively corresponds to a feature extraction stage for the fault signature. Then, the reconstructed signal is subjected to a confirmatory assessment of the actual state of the process. Even though this methodology is conceptually simple, its implementation for monitoring the correlation structure is not straightforward. For instance, the current on-line approaches tend to resort to EWMA-like recursions [3-5], which cannot be applied at the data reconstruction level, since the scales included in the reconstructions are not always the same. Given these considerations, a new multiscale procedure involving monitoring statistics based on partial correlations and sensitivity enhancing transformations (SET) of the variables is proposed. This procedure allows for a finer description of the process, since the inner relationships between the variables are analyzed at each scale through partial correlations. The monitoring procedure is also more focused on the time-frequency scales related to the fault; therefore, a better isolation of the fault’s signature is obtained and, consequently, the detection performance is improved.
    The results obtained show that proper modelling of the process network through the SET is a major factor in the detection of structural changes. In fact, it was observed that even single-scale monitoring statistics can achieve the same level of detection capability as their multiscale counterparts when a proper SET is employed. However, the multiscale approach still proved to be useful, since it led to comparable results using a much simpler model. Therefore, the application of a wavelet decomposition is advantageous for systems that are difficult to model, providing a good compromise between modelling complexity and monitoring performance.
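
    A minimal, self-contained sketch of two of the ingredients named above, a one-level Haar wavelet split and scale-wise partial correlations, on simulated data (not the M2NET implementation itself):

        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 512, 4
        X = rng.normal(size=(n, p))
        X[:, 1] += 0.8 * X[:, 0]                 # hypothetical coupling between variables 0 and 1

        def haar_split(x):
            """One-level Haar transform: (approximation, detail) coefficients."""
            x = x[: len(x) // 2 * 2].reshape(-1, 2)
            return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

        def partial_corr(Y):
            """Partial correlations from the inverse covariance (precision) matrix."""
            P = np.linalg.inv(np.cov(Y, rowvar=False))
            d = np.sqrt(np.diag(P))
            R = -P / np.outer(d, d)
            np.fill_diagonal(R, 1.0)
            return R

        approx = np.column_stack([haar_split(X[:, j])[0] for j in range(p)])
        detail = np.column_stack([haar_split(X[:, j])[1] for j in range(p)])
        for name, Y in [("approximation scale", approx), ("detail scale", detail)]:
            print(name, round(partial_corr(Y)[0, 1], 2))   # partial correlation of variables 0 and 1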

    1. Yen et al., Quality and Reliability Engineering International, 2012, 28(4), pp. 409-426.
    2. Bakshi, AIChE Journal, 1998, 44(7), pp. 1596-1610.
    3. Reynolds et al., Journal of Quality Technology, 2006, 38(3), pp. 230-252.
    4. Hawkins et al., Technometrics, 2008, 50(2), pp. 155-166.
    5. Bodnar et al., Computational Statistics & Data Analysis, 2009, 53(9), pp. 3372-3385.
  • Exploiting Uncertainty Information for Empirical Model Building in Process Industries

    Authors: Marco P. Seabra dos Reis (Department of Chemical Engineering, University of Coimbra), Ricardo Rendall (Department of Chemical Engineering, University of Coimbra), Swee-Teng Chin (Analytical Tech Center, The Dow Chemical Company), Leo Chiang (Analytical Tech Center, The Dow Chemical Company)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Process
    Keywords: Measurement uncertainty, Modelling heteroscedasticity, Principal Components Regression, Partial Least Squares, Weighted Least Squares, Ordinary Least Squares
    Submitted at 18-May-2014 23:54 by Marco P. Seabra dos Reis
    23-Sep-2014 16:20 Exploiting Uncertainty Information for Empirical Model Building in Process Industries
    In the analysis of data collected from the Chemical Processing Industries (CPI), it is a recurrent situation that only the output variables (the Y’s), associated with quality-related variables of the end products, have significant amounts of uncertainty associated with them. Process variables (the X’s) consist of measurements such as temperature, pressure and flow readings of the various streams or, more recently, of spectra (NIR, NMR, etc.), whose uncertainties are comparatively low and can therefore usually be neglected to a very good approximation. This additional piece of information, the uncertainty associated with the Y’s, typically presents heteroscedasticity and can potentially enhance the quality of the models developed if properly estimated, or act as an additional source of noise if poorly handled.
    In this talk, we will address the issue of deriving predictive models in situations closer to the ones found in CPIs, through a Monte Carlo study encompassing several high-dimensional models, a variety of noise levels, and different levels of knowledge regarding the uncertainty structure of the data. Different modelling frameworks are tested under these scenarios, such as those based on OLS, WLS, and several versions of PCR and PLS.
    In the end, useful guidelines can be extracted regarding the best methods to use and the added value of obtaining more information about the uncertainty affecting the responses.
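
    As a small illustration of the weighting idea (a minimal sketch with simulated data, not the Monte Carlo study of the talk): when the response uncertainties u_y are known, WLS with weights 1/u_y^2 typically recovers the coefficients better than plain OLS.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200
        X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one process variable
        beta_true = np.array([1.0, 2.0])
        u_y = rng.uniform(0.1, 2.0, size=n)                     # known, heteroscedastic Y uncertainties
        y = X @ beta_true + rng.normal(scale=u_y)               # noise scales with the uncertainty

        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]         # ignores the uncertainty information
        W = 1.0 / u_y ** 2                                      # weights from the uncertainty information
        beta_wls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
        print("OLS:", np.round(beta_ols, 3), " WLS:", np.round(beta_wls, 3))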
  • The Challenge of Testing Autonomous Vehicles

    Authors: Luigi del Re (Johannes Kepler University Linz)
    Primary area of focus / application: Reliability
    Keywords: Reliability, Transportation
    Submitted at 19-May-2014 08:58 by Kristina Lurz
    22-Sep-2014 14:30 Keynote by Luigi del Re: The Challenge of Testing Autonomous Vehicles
    The impressive reliability of modern transportation systems is not a trivial issue. Indeed, most vehicles commercially available on the market are the result of large-scale production. This presents a real challenge for reliability, as the unknown user has very high expectations, while the producer does not know who will use the vehicles or how.

    The impressive reliability of modern transportation systems is to a very large extent the effect of the systematic development of testing methods which allow testing a specific “deterministic” test object (the vehicle) in a large number of “stochastic” situations. This is done first at the component level on dedicated test benches and later by fleet tests.

    In the case of autonomous or semi-autonomous vehicles, the problem complexity is greatly increased by the fact that a control algorithm takes the place of the driver and the producer becomes responsible – if not legally, at least commercially – not only for the performance of the vehicle but also for the way it is driven. This enormously increases the number of situations to be considered, making standard methods, such as fleet testing, unaffordable.

    This keynote gives an overview of the different approaches being analyzed by various groups to cope with this problem.