ENBIS-17 in Naples

9 – 14 September 2017; Naples (Italy) Abstract submission: 21 November 2016 – 10 May 2017

My abstracts


The following abstracts have been accepted for this event:

  • BivRegBLS: A New R Package in Method Comparison Studies with Tolerance Intervals and (Correlated)-Errors-in-Variables Regressions

    Authors: Bernard Francq (GSK), Marion Berger (Sanofi)
    Primary area of focus / application: Metrology & measurement systems analysis
    Secondary area of focus / application: Quality
    Keywords: Method comparison studies, Tolerance intervals, Bland-Altman, R, Errors-in-variables regressions, BivRegBLS
    Submitted at 6-Mar-2017 23:55 by Bernard Francq
    Scheduled: 11-Sep-2017 12:00
    The need for laboratories to quickly assess the quality of samples drives the development of new measurement methods. These methods should yield results comparable with those obtained by a standard method.
    Two main methodologies are presented in the literature. The first is the Bland–Altman approach with its agreement intervals (AIs) in a (M=(X+Y)/2, D=Y-X) space, where two methods (X and Y) are interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot, whereby two methods are considered equivalent when they provide similar measures notwithstanding the random measurement errors.
    A consistent correlated-errors-in-variables (CEIV) regression is introduced, as the errors are shown to be correlated in the Bland–Altman plot. When this correlation is ignored, the coverage probabilities collapse drastically and the biases soar. Novel, robust tolerance intervals (based on unreplicated or replicated designs) are shown to outperform AIs, and novel predictive intervals in the (X,Y) plot and in the (M,D) plot are introduced with excellent coverage probabilities.
    Guidelines for practitioners will be discussed and illustrated with the new and promising R package BivRegBLS. We will explain how to model and plot the data in the (X,Y) space with the BLS (Bivariate Least Squares) regression, or in the (M,D) space with the CBLS (Correlated BLS) regression, by using BivRegBLS. The main functions will be explored, with an emphasis on the output and how to plot the results.
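The (M, D) transformation at the heart of the Bland–Altman approach is easy to sketch. The package itself is written in R; the following Python sketch (data and function name are invented for illustration) merely shows the coordinates and the classical agreement interval, assuming approximately normal differences:

```python
import numpy as np

def bland_altman(x, y, k=1.96):
    """Bland-Altman coordinates and a classical ~95% agreement interval.

    x, y: paired measurements of the same samples by two methods.
    k:    normal quantile (1.96 for approximately 95% limits).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = (x + y) / 2.0          # means, horizontal axis
    d = y - x                  # differences, vertical axis
    bias = d.mean()
    sd = d.std(ddof=1)
    return m, d, (bias - k * sd, bias + k * sd)

# Hypothetical paired measurements: one true value, two noisy methods.
rng = np.random.default_rng(0)
truth = rng.uniform(10, 20, 50)
x = truth + rng.normal(0, 0.5, 50)
y = truth + rng.normal(0, 0.5, 50)
m, d, (lo, hi) = bland_altman(x, y)
print(lo < hi)
```

Note that because D = Y - X and M = (X + Y)/2 are built from the same noisy measurements, the errors on the two axes of the (M, D) plot are correlated, which is precisely the motivation for the CBLS regression.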
  • Advanced Run-to-Run Controller in Semiconductor Manufacturing with Real-time Equipment Condition

    Authors: Wei-Ting Yang (École des Mines de Saint-Étienne), Jakey Blue (École des Mines de Saint-Étienne), Agnès Roussy (École des Mines de Saint-Étienne), Marco Reis (University of Coimbra)
    Primary area of focus / application: Process
    Secondary area of focus / application: Modelling
    Keywords: Advanced Process Control (APC), Fault Detection and Classification (FDC), Principal Component Analysis (PCA), Canonical Correlation
    Submitted at 7-Mar-2017 15:09 by Wei-Ting Yang
    Scheduled: 13-Sep-2017 09:40
    The Run-to-Run (R2R) controller is the main process control approach in the semiconductor industry. Currently, these controllers are mainly updated with key process parameters and metrology data, while equipment behavior is not taken into account in the R2R model. However, the natural deterioration and the real-time condition of the equipment, though they may not affect product quality directly, can still influence the process outcome. Therefore, equipment signals and process states should be analyzed in order to enhance the R2R model, beyond the strict use of process physics concepts, which is the currently established practice.
    In this research, in order to extract more potential factors and exploit the data more fully, the equipment state is assessed by leveraging Fault Detection and Classification (FDC) data. This is an opportunity to analyze whether the FDC parameters that play critical roles can be used explicitly in the R2R model, i.e., whether the generated regulations should take advantage of the existing FDC readings. Furthermore, since FDC data come at a higher resolution (wafer-based multivariate temporal profiles) while the metrology is constrained by the sampling strategy, FDC may provide more comprehensive information.
    In this talk, we align the three types of data sources and report on the relational model that is under development. With the goal of finding the critical FDC signals, or composite indicators built from them, we link the R2R regulation to the metrology via the equipment behavior and validate the original path of R2R regulation, which is based purely on the process physics.
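As background, the baseline R2R controller that the abstract proposes to enhance is commonly implemented as an EWMA update of an estimated process disturbance. The following is a generic textbook sketch, not the authors' FDC-enhanced model; the process gain, target, drift and noise values are all invented:

```python
# Minimal EWMA run-to-run controller sketch (illustrative only).
# Assumes a linear process y = a_t + b*u with known gain b, a slowly
# drifting disturbance a_t, and a quality target T.
import random

b, T, lam = 2.0, 10.0, 0.3   # process gain, target, EWMA weight
a_hat = 0.0                  # current estimate of the disturbance
a_true = 1.0                 # unknown disturbance (drifts over runs)
random.seed(1)

outputs = []
for run in range(50):
    u = (T - a_hat) / b                            # recipe for this run
    y = a_true + b * u + random.gauss(0, 0.1)      # measured output
    a_hat = lam * (y - b * u) + (1 - lam) * a_hat  # EWMA disturbance update
    a_true += 0.05                                 # slow equipment drift
    outputs.append(y)

# Later runs should track the target despite the drift.
print(abs(sum(outputs[-10:]) / 10 - T) < 0.5)
```

Within this scheme, the question raised in the abstract amounts to whether the disturbance estimate should be driven by metrology alone or also by the FDC equipment signals.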
  • Association Rules and Compositional Data Analysis: Implications to Big Data

    Authors: Ron S. Kenett (KPA Ltd. and Neaman Institute), J.A. Martín-Fernández (University of Girona), S. Thió-Henestrosa (University of Girona), M. Vives-Mestres (University of Girona)
    Primary area of focus / application: Other: Categorical data
    Keywords: Association Rules (AR), Itemsets, Relative linkage disequilibrium (RLD), Compositional Data (CoDa), Subcompositional coherence, Big Data, Categorical data
    Submitted at 8-Mar-2017 09:37 by Ron Kenett
    Accepted
    Scheduled: 12-Sep-2017 09:00
    Many modern organizations generate a large amount of transaction data on a daily basis. Transactions typically include semantic descriptors that require specialised methods for analysis. Association rule (AR) mining is a powerful semantic data analytic technique used for extracting information from transaction databases. AR mining was originally developed for basket analysis, where combinations of items in a shopping basket are evaluated to determine their prevalence. To generate an AR, the collection of the most frequent itemsets (sets of two or more items) must first be detected. Then, as a second step, all possible ARs are generated from each itemset. The ARs are then ranked using measures of association labelled, in this context, "measures of interestingness". The R package "arules" provides more than a dozen such measures, including the relative linkage disequilibrium (RLD), which normalises the classical Euclidean distance of the itemset from a surface of independence.
    Because an AR can be expressed as a contingency table, it is an element of the simplex, the sample space of compositional data (CoDa). It is well known that the CoDa methodology provides attractive properties such as subcompositional coherence and scalability. In this work we explore the contributions of CoDa methods to AR mining in big data analysis. The talk will focus on such aspects, including the dynamic visualization of CoDa-AR measures on a simplex representation of the itemsets and its multidimensional extension using log-ratio coordinates.
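Since an association rule reduces to a 2x2 contingency table, and its normalized cells to a point on the simplex, the classical interestingness measures are simple functions of four counts. The counts below are invented for illustration (the actual measures, including RLD, are available in the R package "arules"):

```python
# An association rule A -> B reduces to a 2x2 table of counts;
# normalized, the four cells form a composition on the simplex.
n11, n10, n01, n00 = 40, 10, 20, 30   # counts: A&B, A&!B, !A&B, !A&!B
n = n11 + n10 + n01 + n00
cells = [n11 / n, n10 / n, n01 / n, n00 / n]   # point on the simplex

support = n11 / n                     # P(A and B)
confidence = n11 / (n11 + n10)        # P(B | A)
lift = confidence / ((n11 + n01) / n) # P(B | A) / P(B)

print(round(support, 2), round(confidence, 2), round(lift, 2))
# → 0.4 0.8 1.33
```

Working on the normalized cells rather than the raw counts is what places the rule in CoDa territory: any conclusion should be invariant to the total number of transactions.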
  • The Choice of Screening Design

    Authors: Muhammad Azam Chaudhry (Norwegian University of Science and Technology, Norway), John Sølve Tyssedal (Norwegian University of Science and Technology, Norway)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Design and analysis of experiments
    Keywords: Factor screening, Definitive screening design, Minimum run resolution IV design, Plackett-Burman design
    Submitted at 8-Mar-2017 10:16 by Muhammad Azam Chaudhry
    Scheduled: 12-Sep-2017 09:20
    A screening design is an experimental plan used for identifying the expectedly few active factors among potentially many. Two-level designs have traditionally been the preferred screening designs, but today practitioners face many alternatives when choosing a screening design. In addition to regular, nonregular and minresIV two-level designs, a fourth class of screening designs has lately been introduced: the definitive screening designs, which are three-level designs and also allow the estimation of a certain number of quadratic effects. However, which of these designs actually performs best for screening? Designs from the four respective classes have, naturally, never been tested together in the same practical experimental situation. In this presentation we report the results of a simulation study comparing the screening performance of three designs, a nonregular, a minresIV and a definitive screening design, under various scenarios, some with quadratic effects present and some without. We end by giving some recommendations and some warnings that we think will be useful to practitioners when choosing a screening design for their practical applications.
  • Constrained Functional Time Series: An Application to Demand and Supply Curves in the Italian Natural Gas Balancing Platform

    Authors: Antonio Canale (Università di Padova)
    Primary area of focus / application: Other: ENBIS Young Statistician Award
    Keywords: Autoregressive model, Demand and offer model, Energy forecasting, Functional data analysis, Functional ridge regression
    Submitted at 9-Mar-2017 09:54 by Antonio Canale
    In Europe, several legislative and infrastructural measures have been undertaken to regulate energy markets. In Italy, for example, we have witnessed the recent introduction of the natural gas balancing platform, a system in which gas operators virtually sell and buy natural gas in order to balance the common pipeline network. Each day, the operators submit demand bids and supply offers, which are eventually sorted according to price. Demand and supply curves are then obtained by cumulating the corresponding quantities.

    Motivated by the modelling of market dynamics in the Italian natural gas balancing platform, we propose a model to analyze time series of monotone functions subject to an equality constraint and an inequality constraint at the two edges of the domain, respectively, such as daily demand and offer curves. In detail, we endow the constrained functions with a suitable pre-Hilbert structure and introduce a useful isometric bijective map associating each possible bounded and monotonic function with an unconstrained one. We then introduce a functional-to-functional autoregressive model that is used to forecast future demand/offer functions. We estimate the model by minimizing a penalized mean squared error of prediction, with a penalty term based on the Hilbert–Schmidt squared norm of the autoregressive lagged operators.
    The approach is of general interest and is suited for generalization in any situation in which one has to deal with functions subject to the above constraints which evolve through time.
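In one plausible notation (the symbols are ours, not necessarily the authors'), the functional autoregression and its penalized estimator described above read:

```latex
% Functional AR(1) model on the mapped, unconstrained curves f_t in a
% Hilbert space H, with \Psi a linear operator on H
f_{t+1} = \Psi(f_t) + \varepsilon_{t+1}

% Penalized mean squared error of prediction, with a Hilbert--Schmidt
% penalty on the lagged operator
\hat{\Psi} = \arg\min_{\Psi} \sum_{t}
  \bigl\| f_{t+1} - \Psi(f_t) \bigr\|_{H}^{2}
  + \lambda \,\| \Psi \|_{\mathrm{HS}}^{2}
```

Forecasts of future demand/offer curves are then obtained by applying the estimated operator and mapping the result back through the inverse of the isometric bijection, so the predicted curve automatically satisfies the monotonicity and edge constraints.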
  • Modelling Degradation Data Using the Gamma Process and Its Generalizations

    Authors: Massimiliano Giorgio (Università della Campania "Luigi Vanvitelli"), Gianpaolo Pulcini (Istituto Motori, National Research Council (CNR))
    Primary area of focus / application: Other: Statistical analysis of industrial reliability and maintenance data
    Keywords: Degradation, Gamma process, Transformed Gamma process, Covariates, Random effect, Residual lifetime prediction
    Submitted at 9-Mar-2017 17:52 by Massimiliano Giorgio
    Scheduled: 12-Sep-2017 09:40
    Many units degrade during their life and fail when their degradation level exceeds a prefixed threshold. Hence, once the degradation level is described via a proper stochastic process, it is possible to formulate the unit reliability in terms of the first passage time of the degradation level to the threshold. This approach, which links the failure time of a degrading unit to its degradation level, offers several advantages. In particular, it permits the unit reliability to be estimated from degradation data even when failure data are not available, and it enables one to predict the residual life in order to perform condition-based maintenance activities.
    Among the degradation processes proposed in the literature, the Gamma process is probably the most widely applied in the reliability field. Its key success factors are flexibility and mathematical tractability, which allow many different kinds of degradation phenomena to be handled with a very limited computational burden. Relying on the latter feature, many authors have proposed generalizations of the Gamma process for use in specific practical settings. The aim of this paper is to survey some of these generalizations. In particular, we will focus on models that incorporate covariates, models that can account for the presence of random effects, models that assume the dependence of the degradation increment on the current degradation level (or state) of the unit, and combinations thereof. All these models will be presented, and examples of applications to real degradation data will be illustrated.
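The first-passage-time formulation above is straightforward to explore by simulation. Below is a minimal Monte Carlo sketch for a stationary gamma degradation process with a linear shape function; all parameter values are invented for illustration:

```python
# Monte Carlo sketch of a stationary gamma degradation process and its
# first passage time to a failure threshold (illustrative parameters).
import numpy as np

rng = np.random.default_rng(42)
shape_rate = 1.0   # shape accumulated per unit time (linear shape function)
scale = 0.5        # scale parameter of the gamma increments
threshold = 10.0   # failure threshold on the degradation level
dt, n_paths = 0.1, 2000
steps = 400        # simulate up to time steps * dt = 40

# Independent gamma increments: degradation is non-decreasing by construction.
increments = rng.gamma(shape_rate * dt, scale, size=(n_paths, steps))
paths = increments.cumsum(axis=1)

# First passage time of each path to the threshold (inf if never reached).
crossed = paths >= threshold
fpt = np.where(crossed.any(axis=1),
               (crossed.argmax(axis=1) + 1) * dt, np.inf)

# Reliability at time t is P(first passage time > t).
t = 15.0
reliability = np.mean(fpt > t)
print(reliability)
```

The generalizations surveyed in the talk would replace the constant-rate increments with forms depending on covariates, on unit-specific random effects, or on the current degradation state.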