ENBIS-17 in Naples

9 – 14 September 2017; Naples (Italy)
Abstract submission: 21 November 2016 – 10 May 2017

The following abstracts have been accepted for this event:

  • Robustness of the Classical and Recently Developed Exponential Smoothing Methods

    Authors: Aleš Toman (University of Ljubljana, Faculty of Economics)
    Primary area of focus / application: Business
    Keywords: Exponential smoothing methods, Time Series forecasting, Robustness analysis, Extended Holt-Winters method
    Submitted at 17-Apr-2017 23:24 by Ales Toman
    Accepted
    13-Sep-2017 10:50 Robustness of the Classical and Recently Developed Exponential Smoothing Methods
    Exponential smoothing methods are powerful tools for decomposing and smoothing time series and for predicting their future values. Although distinguished by their simplicity and computational stability, they produce forecasts comparable to those of more complex statistical time series models. Classical smoothing methods can handle trend and seasonality in both additive and multiplicative form, and the methods we developed recently (e.g. the modified and extended Holt-Winters methods) are adapted even to a substantial noise component and to repeated zero values in the data (the standard additive recursions are sketched after this abstract).

    In this presentation, we compare the smoothing and forecasting performance of the classical and recently developed smoothing methods when time series contain irregularities such as symmetric or asymmetric outliers, level shifts, or structural breaks. We define and justify a new symmetric relative smoothing (or forecasting) efficiency measure (SREM) that allows us to evaluate the performance of different smoothing methods across unequal-sized groups of time series.

    We start the analysis by simulating time series with known trend and seasonal patterns and applying several smoothing methods to the series before and after adding pre-specified irregularities to the data. The magnitude of the changes in SREM values indicates the robustness of the different methods, with the methods undergoing the largest changes being the most sensitive. Additionally, we use real seasonal time series from the M3-Competition in place of the simulated series to analyze the robustness of the methods when the true series pattern is not known. We conclude that robustness should, in general, not be overlooked by researchers when proposing and advocating new forecasting methods.
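    As a concrete illustration of the kind of experiment described above, here is a minimal sketch of additive Holt-Winters smoothing with a simple outlier check. The function name, smoothing constants, and simulated series are illustrative assumptions, not material from the talk.

    ```python
    # Minimal sketch of additive Holt-Winters smoothing plus a simple robustness
    # check: smooth a simulated seasonal series before and after injecting an
    # outlier, then compare one-step-ahead errors. Names, smoothing constants,
    # and the simulated data are illustrative assumptions.
    import numpy as np

    def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2):
        """Return one-step-ahead fitted values for additive Holt-Winters, period m."""
        level = y[:m].mean()
        trend = (y[m:2 * m].mean() - y[:m].mean()) / m
        season = list(y[:m] - level)                 # initial seasonal indices
        fitted = []
        for t, obs in enumerate(y):
            s = season[t % m]
            fitted.append(level + trend + s)         # forecast made before seeing obs
            new_level = alpha * (obs - s) + (1 - alpha) * (level + trend)
            trend = beta * (new_level - level) + (1 - beta) * trend
            season[t % m] = gamma * (obs - new_level) + (1 - gamma) * s
            level = new_level
        return np.array(fitted)

    rng = np.random.default_rng(1)
    t = np.arange(120)
    y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)
    y_out = y.copy()
    y_out[60] += 8                                   # single additive outlier
    for label, series in (("clean", y), ("with outlier", y_out)):
        mse = np.mean((series - holt_winters_additive(series, m=12)) ** 2)
        print(f"{label}: one-step in-sample MSE = {mse:.3f}")
    ```

    Injecting the outlier inflates the in-sample one-step error; comparing such error measures before and after contamination, across many series, is the essence of the robustness analysis the abstract describes.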
  • Bayesian Estimation of Calibration Functions

    Authors: Merlin Keller (EDF), Nicolas Fischer (LNE), Catherine Yardin (LNE), Marc Sancandi (CEA), Laurent Leblond (PSA), Benoît Savanier (CETIAT)
    Primary area of focus / application: Other: French SFdS session on Computer experiments and energy
    Keywords: Metrology, Measurement functions, Bayesian calibration, Error in variables
    Submitted at 18-Apr-2017 12:26 by Merlin Keller
    12-Sep-2017 12:40 Bayesian Estimation of Calibration Functions
    Calibration of measuring instruments is a daily task in any metrological field. The new definition of calibration in the International Vocabulary of Metrology (VIM) specifies a two-step procedure: first, a calibration function is estimated, and then the estimated relation is used to predict a new measurement. Quantifying the uncertainties attached to the calibration function and to the ensuing prediction is a key challenge. Several Bayesian regression approaches have been proposed recently to this end. However, these do not account for the errors in variables that are specific to metrological applications. On the other hand, such errors are commonly accounted for in a frequentist context. We propose to bridge this gap by introducing a Bayesian error-in-variables approach, and we show its benefits on several case studies.
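    A schematic version of the error-in-variables calibration model the abstract refers to might read as follows; the notation and Gaussian error assumptions are illustrative, not taken from the talk.

    ```latex
    % Illustrative Bayesian error-in-variables calibration model (assumed notation):
    % t_i   : unknown true reference values     x_i : observed reference values
    % theta : calibration-function parameters   y_i : observed instrument indications
    \begin{align*}
      x_i &= t_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0,\sigma_x^2)
           && \text{(reference observed with error)}\\
      y_i &= f(t_i;\theta) + \eta_i, \qquad \eta_i \sim \mathcal{N}(0,\sigma_y^2)
           && \text{(instrument response)}\\
      \theta &\sim \pi(\theta), \qquad t_i \sim \pi(t)
           && \text{(priors completing the Bayesian model)}
    \end{align*}
    ```

    In the second VIM step, the fitted function f is inverted at a new indication, with the posterior uncertainty about \theta propagated into the predicted measurement.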
  • Enumeration and Multi-Criteria Selection of Minimally Aliased Response Surface Designs

    Authors: José Núñez Ares (KU Leuven), Jeffrey Linderoth (UW Madison), Peter Goos (KU Leuven / Antwerp University)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Response surface design, Enumeration, Integer programming, High throughput computing
    Submitted at 18-Apr-2017 18:33 by Jose Nunez Ares
    12-Sep-2017 09:00 Enumeration and Multi-Criteria Selection of Minimally Aliased Response Surface Designs
    Response surface designs are a core component of response surface methodology, which is widely used in the context of product and process optimization. In some applications, the standard response surface designs, such as Box-Behnken designs and central composite designs, are too large to be practically feasible. To address this problem, we identify a new class of response surface designs, which we call Minimally Aliased Response Surface Designs (MARSDs). These designs have attractive technical properties similar to those of standard response surface designs. For example, the main effects of the factors under investigation can be estimated independently, and there is no aliasing between the main effects and the second-order effects (two-factor interactions and quadratic effects; the full second-order model is written out below). In this talk, we discuss the properties of the catalogs of MARSDs we constructed, provide a brief overview of the branch-and-prune algorithm we used to construct the catalogs, and demonstrate the added value of the new designs using a garden sprinkler experiment.
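    For reference, the full second-order model that response surface designs such as MARSDs are built to estimate, in standard notation for k factors:

    ```latex
    % Full second-order response surface model in k factors (standard notation).
    \[
      y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2
          + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon .
    \]
    ```

    "No aliasing between main effects and second-order effects" means the main-effect coefficients can be estimated without bias from the quadratic and interaction terms.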
  • What is REML and Why Does It Work?

    Authors: Chris Gotwalt (JMP Division of SAS Institute)
    Primary area of focus / application: Modelling
    Keywords: REML, Linear mixed model, Generalized linear mixed model, Firth estimation, Variance components
    Submitted at 21-Apr-2017 15:56 by Chris Gotwalt
    11-Sep-2017 16:20 What is REML and Why Does It Work?
    Estimating the parameters of the linear mixed model via Restricted Maximum Likelihood (REML) is one of the most widely used approaches to statistical estimation. This is because years of experience, gained over a broad array of applications from quality control to clinical trials, have shown that REML estimates of variance parameters often have lower bias than Maximum Likelihood (ML) estimates. However, outside of balanced cases, the reason for this superiority of REML has remained somewhat mysterious. This presentation starts with a brief review of some of the classic motivations for, and derivations of, REML and then presents a new derivation demonstrating that the REML estimator for a variance component model is an instance of a Firth adjusted estimator (the two objective functions are sketched below). This new derivation is advantageous for several reasons. It makes clear for which types of correlation structures we can expect less bias from REML compared with ML. It also shows that there are common correlation structures to which REML is often applied, but for which the hoped-for bias improvement is not on a solid theoretical footing. Fortunately, in such cases, the derivation also makes clear how the likelihood estimating equation can be adjusted so that the resulting estimates still enjoy reduced bias compared with ML.
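    For orientation, here are the two standard objective functions the talk connects; both are textbook expressions (constants omitted), not the talk's new derivation itself. For the linear mixed model y ~ N(X\beta, V(\theta)):

    ```latex
    % REML restricted log-likelihood (constants dropped), with the GLS estimate of beta,
    % followed by Firth's (1993) penalized log-likelihood (a Jeffreys-prior penalty).
    \begin{align*}
      \ell_R(\theta) &= -\tfrac{1}{2}\Bigl[\log\lvert V(\theta)\rvert
          + \log\lvert X^{\top} V(\theta)^{-1} X \rvert
          + (y - X\hat\beta)^{\top} V(\theta)^{-1} (y - X\hat\beta)\Bigr],\\
      \hat\beta &= \bigl(X^{\top} V(\theta)^{-1} X\bigr)^{-1} X^{\top} V(\theta)^{-1} y,\\[4pt]
      \ell^{*}(\theta) &= \ell(\theta) + \tfrac{1}{2}\log\lvert I(\theta)\rvert .
    \end{align*}
    ```

    Per the abstract, the new derivation shows that for variance component models the REML correction term, the log-determinant involving X and V, arises as a Firth-type penalty of this kind.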
  • Modern Quality and Process Control Software Tools

    Authors: Chris Gotwalt (JMP Division of SAS Institute)
    Primary area of focus / application: Other: Software
    Keywords: Quality control, Six Sigma, Control Charts, Measurement systems analysis, Big Data, Industry 4.0
    Submitted at 21-Apr-2017 19:40 by Chris Gotwalt
    11-Sep-2017 12:50 Modern Quality and Process Control Software Tools
    Manufacturing quality and process control are essential analytical tools in virtually every industry, from pharmaceuticals to semiconductor manufacturing. Software has been an integral part of this work for many years, and its importance is only increasing with the rise of Big Data, IoT, and Industry 4.0. The last several releases of JMP software, most recently JMP 13, have seen a renewal of JMP's Quality and Process modules to make them more visual, more interactive, and better able to scale to Big Data. In this software session we will demonstrate interactive root-cause finding with JMP's Control Chart Builder, graphical methods for Measurement Systems Analysis, and Process Screening, a powerful new JMP platform intended for surfacing the most important quality issues when you have potentially thousands of process variables and millions of observations to sift through.
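    As a small taste of the underlying computations such tools automate, here is a minimal individuals/moving-range (I-MR) control chart calculation. The function, data, and constants are illustrative textbook material, not JMP code or anything shown in the session.

    ```python
    # Minimal sketch of individuals (I-MR) control chart limits: estimate sigma
    # from the average moving range, set 3-sigma limits, and flag signals.
    # Data and names are illustrative assumptions.
    import numpy as np

    def imr_limits(x):
        """Return (center, lcl, ucl) for an individuals chart via the moving range."""
        x = np.asarray(x, dtype=float)
        mr = np.abs(np.diff(x))              # moving ranges of consecutive points
        sigma_hat = mr.mean() / 1.128        # d2 constant for subgroups of size 2
        center = x.mean()
        return center, center - 3 * sigma_hat, center + 3 * sigma_hat

    rng = np.random.default_rng(0)
    x = rng.normal(loc=10.0, scale=0.5, size=50)
    x[25] += 2.5                             # inject a special-cause excursion
    center, lcl, ucl = imr_limits(x)
    signals = np.flatnonzero((x < lcl) | (x > ucl))
    print(f"center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, signals={signals}")
    ```

    Tools like Control Chart Builder and Process Screening essentially run this kind of computation interactively, and at scale, across thousands of process variables.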
  • Digital Life: Statistics for Big Health Data

    Authors: Arnoldo Frigessi (University of Oslo)
    Primary area of focus / application: Other: Invited keynote
    Secondary area of focus / application: Modelling
    Keywords: eHealth, Models, Computational intensive inference, Cancer, Hospital acquired infections
    Submitted at 23-Apr-2017 13:06 by Arnoldo Frigessi
    13-Sep-2017 11:40 Closing Keynote: Arnoldo Frigessi on "Digital Life: Statistics for Big Health Data"
    Data have always played a fundamental role in discovery in bio-medicine. Statistics is one of the most important elements in the cocktail of sciences that form medicine. Most, if not almost all, battles for better health and treatment have been won partly because of statistical methodologies. Now, thanks to revolutionary technologies that allow the minutest phenomena to be observed and measured at very high frequencies, data have reached a new quantitative and qualitative level in bio-medicine. Big health data (including genomics), eHealth (including electronic health records) and mHealth (including the health apps on your smartphone) offer enormous new possibilities for prevention, diagnosis and treatment. This is a lecture on Digital Life, its opportunities, challenges and dangers. I will focus on two particular themes: simulation models for personalised cancer treatment and patient safety monitoring in hospitals. Innovation in statistical methodology is needed to exploit big health data in full.

    This work is in collaboration with many colleagues and students, in particular Alvaro Köhn Luque, Xiaoran Lai, Chi Zhang, and Magne Thoresen.