ENBIS-11 in Coimbra

4 – 8 September 2011
Abstract submission: 1 January – 25 June 2011

My abstracts


The following abstracts have been accepted for this event:

  • Bayesian Analysis of Very Small Unreplicated Experiments

    Authors: Víctor Aguirre-Torres, Román de la Vara
    Affiliation: Instituto Tecnológico Autónomo de México (ITAM)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Active effects, Screening experiments, Non-normal response, Generalized linear models, MCMC
    Submitted at 7-Apr-2011 21:29 by Victor Aguirre
    Accepted (view paper)
    5-Sep-2011 12:10 Bayesian Analysis of Very Small Unreplicated Experiments
    There are situations in which resources are so scarce that very small unreplicated experiments must be used, for example fractions with only 8 or 4 experimental runs. Unreplicated experiments are typically analyzed using the Daniel plot of the effects, but for small fractions the Daniel plot can be hard to interpret even when there are significant effects. For this reason we propose two Bayesian tools for analyzing this kind of experiment: posterior probabilities that the effects are active, and the posterior density of the effects. The first tool is implemented in R for normal or generalized linear models, allowing for non-normal responses, and the second is obtained using WinBUGS. Examples and simulations are given to support the use of these tools.
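    A toy illustration of the first tool for the normal-response case follows. It is a minimal Python sketch, not the authors' R/WinBUGS implementation: posterior probabilities that each effect is active are obtained by enumerating all subsets of effects under a Zellner g-prior on the included effects. The prior activity probability pi, the slab scale g, and the 8-run example data are illustrative assumptions.

    import itertools
    import numpy as np

    def active_effect_probs(X, y, pi=0.25, g=5.0):
        """P(effect j is active | y) by enumerating all 2^k effect subsets."""
        n, k = X.shape
        yc = np.asarray(y, dtype=float) - np.mean(y)
        logw, models = [], []
        for gamma in itertools.product([0, 1], repeat=k):
            idx = [j for j in range(k) if gamma[j]]
            q = len(idx)
            if q == 0:
                rss = yc @ yc
                logml = -0.5 * (n - 1) * np.log(rss)
            else:
                Xg = X[:, idx]
                # Zellner g-prior marginal likelihood (intercept and
                # sigma^2 integrated out under flat priors).
                fit = Xg @ np.linalg.solve(Xg.T @ Xg, Xg.T @ yc)
                rss = yc @ yc - g / (g + 1) * yc @ fit
                logml = -0.5 * q * np.log(1 + g) - 0.5 * (n - 1) * np.log(rss)
            logw.append(logml + q * np.log(pi) + (k - q) * np.log(1 - pi))
            models.append(np.array(gamma))
        w = np.exp(np.array(logw) - max(logw))
        w /= w.sum()
        return sum(wi * gi for wi, gi in zip(w, models))

    # Hypothetical 8-run example: full 2^3 basis columns, one large main effect.
    rng = np.random.default_rng(0)
    base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
    X = np.column_stack([base, base[:, 0] * base[:, 1], base[:, 0] * base[:, 2],
                         base[:, 1] * base[:, 2], base[:, 0] * base[:, 1] * base[:, 2]])
    y = 3.0 * X[:, 0] + rng.normal(0, 1, 8)
    print(np.round(active_effect_probs(X, y), 2))  # effect 0 should dominate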
  • Using Statistics to Monitor and Model an Information System: a Successful Case Study in the Microelectronic Industry.

    Authors: Michel Lutz, Olivier Roustant, Xavier Boucher
    Affiliation: STMicroelectronics / Ecole Nationale Supérieure des Mines de Saint-Etienne
    Primary area of focus / application: Modelling
    Keywords: Semiconductor manufacturing systems, Information systems, Monitoring, Modelling, Holt-Winters, Robust standard-deviation estimator, Multivariate statistical analysis
    Submitted at 8-Apr-2011 08:39 by Michel Lutz
    Accepted (view paper)
    5-Sep-2011 10:50 Using Statistics to Monitor and Model an Information System: a Successful Case Study in the Microelectronic Industry.
    This research is carried out within a semiconductor production plant of STMicroelectronics. The company's Information Technology (IT) Department collects huge databases about its information systems (IS), storing performance and activity variables called metrics. However, these data are under-exploited, because their systematic analysis is not a priority for IT professionals. In this context, we aim to develop statistical tools that help IT professionals (i.e. tools understandable by non-statisticians) take advantage of this wealth of information. We are particularly interested in two activities of the ITIL (Information Technology Infrastructure Library) Capacity Management process: IS monitoring and IS modelling.

    As a starting point, the STMicroelectronics IT Department was manually monitoring a selected set of performance and activity metrics. This was time-consuming and inefficient: only a small part of the IS was under control, and many exceptional IS activities went undetected. We implemented a statistical solution built on Holt-Winters monitoring coupled with a robust standard-deviation estimator. This solution is already up and running and allows fully automated monitoring of several hundred metrics.
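    As an illustration of this kind of monitoring, here is a minimal Python sketch, not the production implementation (the smoothing constants, the 3-sigma rule, and the toy metric are assumptions): additive Holt-Winters one-step-ahead forecasting, with the standard deviation of the forecast errors estimated robustly via the median absolute deviation, and winsorized updates so that flagged outliers do not corrupt the smoothing.

    import numpy as np

    def holt_winters_monitor(y, m, alpha=0.2, beta=0.05, gamma=0.1, k=3.0):
        """Flag observations whose one-step-ahead forecast error exceeds
        k robust sigmas; returns the list of flagged time indices."""
        y = np.asarray(y, dtype=float)
        level, trend = y[:m].mean(), 0.0       # crude initialization
        season = y[:m] - level
        errors, flags = [], []
        for t in range(m, len(y)):
            s = season[t % m]
            forecast = level + trend + s
            e = y[t] - forecast
            y_t = y[t]
            if len(errors) >= m:               # enough history for sigma
                med = np.median(errors)
                # robust sigma: 1.4826 * median absolute deviation
                sigma = 1.4826 * np.median(np.abs(np.array(errors) - med))
                if sigma > 0 and abs(e) > k * sigma:
                    flags.append(t)
                    y_t = forecast + np.sign(e) * k * sigma  # winsorize update
            errors.append(e)
            # standard additive Holt-Winters recursions
            new_level = alpha * (y_t - s) + (1 - alpha) * (level + trend)
            trend = beta * (new_level - level) + (1 - beta) * trend
            season[t % m] = gamma * (y_t - new_level) + (1 - gamma) * s
            level = new_level
        return flags

    # Toy usage: a daily-seasonal metric with one injected anomaly.
    rng = np.random.default_rng(0)
    t = np.arange(600)
    metric = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, 600)
    metric[400] += 15                          # exceptional activity
    print(holt_winters_monitor(metric, m=24)) # should include index 400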

    The modelling activity aims to evaluate the interactions between business and IS activities. STMicroelectronics currently does this work by rule of thumb, without support from any quantified tool. We are developing such a tool, based on a multivariate statistical analysis of the available metrics. So far we have identified and quantified several interesting patterns that help in understanding the interactions between the different layers (resource, application, business) of the IS.
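    As a toy illustration of the kind of multivariate analysis involved (purely illustrative; the metric names and relationships are invented, and this is not the actual STMicroelectronics study), the following Python sketch applies principal component analysis to three simulated metrics, one per IS layer; a dominant first component reflects the cross-layer interaction.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    business = rng.normal(1000, 100, n)               # e.g. production starts per day
    app_load = 0.05 * business + rng.normal(0, 3, n)  # application layer driven by business
    cpu_util = 0.8 * app_load + rng.normal(0, 2, n)   # resource layer follows application
    M = np.column_stack([business, app_load, cpu_util])
    Z = (M - M.mean(0)) / M.std(0)                    # standardize the metrics
    # Principal components from the correlation matrix.
    eigval, eigvec = np.linalg.eigh(np.corrcoef(Z.T))
    order = eigval.argsort()[::-1]
    print("explained variance ratio:", np.round(eigval[order] / eigval.sum(), 2))
    print("first component loadings:", np.round(eigvec[:, order[0]], 2))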

    Beyond answering the ITIL process needs, we are also pleased to see that this work is helping to develop a “statistical culture” within the IT Department: brainstorming meetings and presentations of our research results are generating genuine enthusiasm for the use of statistics.
  • Bayesian analysis of hierarchical codes with different levels of accuracy

    Authors: Loïc Le Gratiet
    Affiliation: CEA & Université Paris VII Denis-Diderot
    Primary area of focus / application: Modelling
    Keywords: co-kriging, multi-level code, computer experiment, surrogate models, Gaussian process regression
    Submitted at 14-Apr-2011 11:48 by Loïc Le Gratiet
    Accepted (view paper)
    5-Sep-2011 10:30 Bayesian analysis of hierarchical codes with different levels of accuracy
    Large computer codes are widely used in engineering to study physical systems. Nevertheless, simulations can be time-consuming, and design based on an exhaustive exploration of the input space of the code is then generally impossible under reasonable time constraints. However, a computer code can often be run at different levels of complexity, yielding a hierarchy of code levels. The aim of our research is to study the use of several levels of a code to predict the output of a costly computer code. The presented multi-stage metamodel is a particular case of co-kriging, a well-known geostatistical method.

    We present a new approach to estimating the model parameters which is effective when many levels of code are available. Furthermore, this approach allows us to incorporate prior information into the parameter estimation. We also address the problem of inverting the co-kriging covariance matrix when the number of levels is large, and we provide a solution showing that the inverse can be computed easily. Finally, we deal with the problem of model validation; in particular, we present a virtual cross-validation method which gives the result of the Leave-One-Out procedure without building sub-metamodels.

    A thermodynamic example is used to illustrate a 3-level co-kriging. The purpose of this example is to predict the result of a physical experiment (which can be considered the most costly code) modelled by an accurate computer code and by a second, less accurate one.
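    The following Python sketch illustrates the underlying two-level idea in the Kennedy-O'Hagan autoregressive form (fine code = rho * coarse code + discrepancy). The kernel hyperparameters are hand-chosen and the test functions invented; this is not the estimation procedure of the talk.

    import numpy as np

    def k_se(a, b, ls):
        """Squared-exponential covariance between 1-D point sets a and b."""
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

    def gp_predict(x, y, xs, ls, nugget=1e-6):
        """Simple-kriging posterior mean of a zero-mean GP."""
        K = k_se(x, x, ls) + nugget * np.eye(len(x))
        return k_se(xs, x, ls) @ np.linalg.solve(K, y)

    # Coarse (cheap) and fine (costly) versions of a code, nested designs.
    f_coarse = lambda x: np.sin(8 * x)
    f_fine   = lambda x: 1.5 * np.sin(8 * x) + 0.2 * (x - 0.5)
    x_c = np.linspace(0, 1, 21)        # many cheap runs
    x_f = x_c[::4]                     # 6 expensive runs, nested in x_c
    y_c, y_f = f_coarse(x_c), f_fine(x_f)

    # Step 1: least-squares estimate of the scale factor rho.
    y_c_at_f = f_coarse(x_f)
    rho = (y_c_at_f @ y_f) / (y_c_at_f @ y_c_at_f)

    # Step 2: one GP on the cheap level, one on the discrepancy.
    xs = np.linspace(0, 1, 200)
    m_coarse = gp_predict(x_c, y_c, xs, ls=0.1)
    m_delta  = gp_predict(x_f, y_f - rho * y_c_at_f, xs, ls=0.3)
    m_fine   = rho * m_coarse + m_delta        # co-kriging predictor
    print("rho =", np.round(rho, 3),
          "| max abs prediction error:",
          np.round(np.abs(m_fine - f_fine(xs)).max(), 3))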
  • House of Reliability for Multi-State System

    Authors: Shuki Dror and Kobi Tsuri
    Affiliation: Department of Industrial Engineering and Management, ORT Braude College, Israel, Email: dror@braude.ac.il
    Primary area of focus / application: Reliability
    Keywords: MSS, Reliability, QFD, MSE
    Submitted at 15-Apr-2011 07:22 by Shuki Dror
    Accepted
    5-Sep-2011 10:10 House of Reliability for Multi-State System
    This paper presents an innovative method that enables a company to determine the vital activities in its reliability program for a multi-state system (MSS). The method is based on a House of Reliability that translates the system's failure costs into the relative importance of the corresponding activities listed in the reliability program. A Mean Square Error (MSE) criterion supports the selection of the vital reliability program activities: it divides the set of activities into two groups, the vital few and the trivial many, choosing the partition that minimizes the overall MSE and thus delineates two homogeneous groups. A case study illustrates the application of the developed methodology to a warfare system (a tank). The vital reliability program activities, treatment routine and spare parts storage, were found to be the best activities for reducing the costs of the tank's failures.
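    A minimal Python sketch of one plausible reading of the MSE partition criterion follows (the importance scores are invented): sort the relative-importance scores, try every cut point, and keep the split that minimizes the total within-group squared deviation from the group means.

    import numpy as np

    def vital_few_split(scores):
        """Split sorted scores into vital few / trivial many by minimal MSE."""
        s = np.sort(np.asarray(scores, dtype=float))[::-1]
        best = (np.inf, None)
        for c in range(1, len(s)):             # c = size of the vital-few group
            vital, trivial = s[:c], s[c:]
            mse = (((vital - vital.mean()) ** 2).sum()
                   + ((trivial - trivial.mean()) ** 2).sum())
            if mse < best[0]:
                best = (mse, c)
        return s[:best[1]], s[best[1]:]

    # Hypothetical relative-importance scores from a House of Reliability.
    scores = [0.28, 0.24, 0.12, 0.09, 0.08, 0.07, 0.06, 0.06]
    vital, trivial = vital_few_split(scores)
    print("vital few:", vital, "| trivial many:", trivial)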
  • Asymmetric Limits in Modified Control Charts for Threshold GARCH Models

    Authors: Esmeralda Gonçalves*, Joana Leite**, Nazaré Mendes-Lopes*
    Affiliation: * CMUC, Department of Mathematics, University of Coimbra; ** Institute of Accounting and Administration of Coimbra, IPC
    Primary area of focus / application: Modelling
    Keywords: Shewhart control charts, Average run length, Asymmetric control limits, TGARCH models, AMS Classification: 62L10, 62M10, 60G10
    Submitted at 15-Apr-2011 13:33 by Nazaré Mendes-Lopes
    Accepted (view paper)
    6-Sep-2011 15:45 Asymmetric Limits in Modified Control Charts for Threshold GARCH Models
    The class of threshold generalized ARCH (TGARCH) processes has received great interest in recent years, notably because it allows the volatility to react differently according to the sign of the process values; that is, it captures the so-called leverage effect, very common in financial time series. Following [3], we introduced modified Shewhart control charts for such models and evaluated the probability of no alarm until time n in the in-control state, taking, as usual, symmetric control limits in the run length definition [1].
    The asymmetric distribution of financial time series observed in some studies [2], combined with the asymmetric characteristics of TGARCH models, makes it natural to generalize that study using asymmetric control limits.
    The aim of this work is thus to define and evaluate the in-control average run length (ARL) of the modified Shewhart charts for TGARCH processes using asymmetric control limits in the run length. We begin by studying the laws of the positive and negative processes associated with the TGARCH model.
    To evaluate the quality of the bounds, a simulation study is developed considering TGARCH models generated by asymmetric innovations, using in particular a mixture of Gaussian distributions as the marginal law of the generating process.
    ____________
    [1] Gonçalves, E., Leite, J., Mendes-Lopes, N. (2011). The ARL of modified Shewhart control charts for conditionally heteroskedastic models. Preprint 11-05, CMUC-DMUC, Coimbra University.
    [2] Hwang, Baek, Park, Choi (2010). Explosive volatilities for threshold GARCH generated by asymmetric innovations. Statistics & Probability Letters, 80, 26-33.
    [3] Severin, Schmid (1999). Monitoring changes in GARCH processes. Allgemeines Statistisches Archiv, 83, 281-307.
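    As an illustration of the setting (not the paper's analytical bounds), the following Python sketch estimates the in-control ARL by Monte Carlo for a Zakoian-type TGARCH(1,1) with Gaussian-mixture innovations and asymmetric limits; all parameter values are invented.

    import numpy as np

    rng = np.random.default_rng(2)

    def mixture_innovation():
        """Zero-mean asymmetric Gaussian mixture (weights/means invented)."""
        if rng.random() < 0.8:
            return rng.normal(-0.25, 0.8)
        return rng.normal(1.0, 1.2)

    def run_length(lo, up, omega=0.1, a_pos=0.2, a_neg=0.4, b=0.5, max_t=10_000):
        """Steps until X_t leaves (lo, up); TGARCH(1,1) recursion on sigma_t."""
        sigma, x = omega, 0.0
        for t in range(1, max_t + 1):
            # Zakoian TGARCH: volatility reacts asymmetrically to the sign of x.
            sigma = omega + a_pos * max(x, 0.0) - a_neg * min(x, 0.0) + b * sigma
            x = sigma * mixture_innovation()
            if not lo < x < up:                # alarm: outside asymmetric limits
                return t
        return max_t

    runs = [run_length(lo=-1.6, up=1.3) for _ in range(500)]
    print("estimated in-control ARL:", round(float(np.mean(runs)), 1))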
  • Experimental Design: BIBD and PBIBD applications and links

    Authors: Teresa A. Oliveira and Amilcar Oliveira
    Affiliation: Center of Statistics and Applications, University of Lisbon (CEAUL); DCeT-Universidade Aberta
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Experimental Design, Randomized Block Design, BIBD, PBIBD, Hadamard Matrices
    Submitted at 18-Apr-2011 12:42 by Teresa Oliveira
    Accepted (view paper)
    6-Sep-2011 11:15 Experimental Design: BIBD and PBIBD applications and links
    The goal of any experimental design is to obtain the maximum amount of information for a given experimental effort, to allow comparisons between varieties, and to control sources of random variability. Block designs are used to control such sources, since the main purpose of forming blocks is to maintain homogeneity within blocks. The simplest block design is the Randomized Complete Block (RCB). When the number of varieties in an experiment increases, incomplete block designs with smaller block sizes can be adopted; Balanced Incomplete Block Designs (BIBD) and Partially Balanced Incomplete Block Designs (PBIBD) are two important types of such designs. A BIBD is a randomized block design in which the number of varieties is greater than the block size and every pair of varieties occurs equally often across the blocks. In some incomplete designs it is not possible to achieve this balance, but the design may still be partially balanced. BIBD and PBIBD have applications in areas as diverse as Agriculture, Industry, Genetics, Biology, Education Sciences and Cryptography. Thanks to their optimality properties, these designs also have highly relevant links to Pure and Applied Mathematics. In our work we illustrate some applications and links of BIBD and PBIBD; a small sketch verifying the defining balance property of a classical BIBD is given below.
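    A minimal Python sketch of the defining balance property, using the classical (v = 7, b = 7, r = 3, k = 3, lambda = 1) design, the Fano plane:

    from itertools import combinations

    # The 7 blocks are the cyclic shifts of the difference set {0, 1, 3} mod 7.
    blocks = [(0, 1, 3), (1, 2, 4), (2, 3, 5), (3, 4, 6),
              (4, 5, 0), (5, 6, 1), (6, 0, 2)]
    v = 7

    # Count how often each pair of varieties occurs together in a block.
    pair_counts = {pair: 0 for pair in combinations(range(v), 2)}
    for blk in blocks:
        for pair in combinations(sorted(blk), 2):
            pair_counts[pair] += 1

    print("pair occurrence counts:", set(pair_counts.values()))   # {1}: balanced, lambda = 1
    print("replications per variety:",
          {i: sum(i in blk for blk in blocks) for i in range(v)}) # all equal to r = 3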