ENBIS-14 in Linz

21–25 September 2014; Johannes Kepler University, Linz, Austria
Abstract submission: 23 January – 22 June 2014

The following abstracts have been accepted for this event:

  • Recent Developments in Gaussian Random Field Modeling and Their Application in Manufacturing

    Authors: Enrique del Castillo (Penn State University)
    Primary area of focus / application: Modelling
    Keywords: Gaussian random field, Spatial data, Gaussian Markov random field
    Submitted at 19-May-2014 09:03 by Enrique del Castillo
    Accepted
    22-Sep-2014 09:15 Opening Keynote by Enrique del Castillo: Recent Developments in Gaussian Random Field Modeling and their Application in Manufacturing
    Gaussian Random Field (GRF) models have been widely used in geostatistics and machine learning and, more recently, for modeling industrial metrology data. In this talk, we overview some recent work on GRF modeling and its application using manufacturing data. We first present a new type of GRF model used to fit parametric surface models to metrology data that are compatible with CAD (Computer Aided Design) models. This is achieved by considering the data as located on a non-Euclidean space (the surface of interest) and modeling spatial correlation as a function of geodesic distances on the surface, as opposed to distances in the Euclidean space the surface is embedded in.

    We next address a more fundamental question in GRF modeling that also occurs in manufacturing, given the very large datasets obtained from non-contact sensors: the “big n” problem, i.e., how to fit a GRF model to a large dataset. Traditional methods for fitting a GRF model to spatial data via maximum likelihood (ML) require optimizing a highly non-convex function. In addition, iterative methods to solve the ML problem require O(n³) floating point operations per iteration. We present a new two-step GRF estimation procedure which first solves a convex l1-regularized likelihood problem to fit a sparse precision (inverse covariance) matrix to the GRF model. This implies a Gaussian Markov Random Field (GMRF) approximation to the GRF, although we do not explicitly fit a GMRF. In a second step, we estimate the parameters of the GRF spatial covariance function by finding the covariance matrix that is closest in Frobenius norm to the precision matrix obtained in the first stage. We show how this second-stage problem can be solved using a simple line search if an isotropic stationary covariance function is assumed.

    Numerical experiments on both synthetic and real data sets show improvements of one order of magnitude in mean square prediction error over competitor methods for a variety of spatial covariance models. The proposed approach can easily be parallelized to fit a GRF model to large data sets and can easily be modified to allow for non-isotropic covariance functions. The talk will conclude with a description of the application of the new methods to the part-to-part control of metallic 3D-printed parts.
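
    As a rough illustration of the two-step estimation idea (not the authors' implementation), the Python sketch below uses scikit-learn's GraphicalLasso as a stand-in for the l1-regularized precision fit, followed by a bounded line search over the range parameter of an assumed isotropic exponential covariance; all data sizes, names, and tuning values are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.spatial.distance import cdist
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)

    # Hypothetical data: 100 measurement sites and 200 replicate field
    # observations (real metrology data would look quite different).
    coords = rng.uniform(0, 10, size=(100, 2))
    dists = cdist(coords, coords)
    true_cov = np.exp(-dists / 2.0)              # exponential covariance
    samples = rng.multivariate_normal(np.zeros(100), true_cov, size=200)

    # Step 1: convex l1-regularized likelihood fit of a sparse precision
    # matrix -- an implicit GMRF-style approximation. alpha is an assumed
    # tuning value, not one from the talk.
    glasso = GraphicalLasso(alpha=0.1).fit(samples)
    cov_target = glasso.covariance_              # inverse of sparse precision

    # Step 2: line search over the range parameter of an isotropic
    # stationary covariance, minimizing the Frobenius gap to the Step 1
    # estimate (the exact matching criterion used in the talk is assumed).
    def frobenius_gap(length_scale):
        return np.linalg.norm(np.exp(-dists / length_scale) - cov_target, "fro")

    res = minimize_scalar(frobenius_gap, bounds=(0.1, 10.0), method="bounded")
    print(f"estimated range parameter: {res.x:.3f}")
    ```
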
  • Statistical Methods for Degradation Data with Dynamic Covariates and an Application to Outdoor Weathering Prediction

    Authors: William Q. Meeker (Iowa State University)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Reliability
    Keywords: Covariate process, Environmental conditions, Lifetime prediction, Organic coatings, System health monitoring, Usage history
    Submitted at 19-May-2014 09:06 by Kristina Lurz
    Accepted
    24-Sep-2014 12:00 Closing Keynote by William Q. Meeker: Statistical Methods for Degradation Data with Dynamic Covariates and an Application to Outdoor Weathering Prediction
    Degradation data provide a useful resource for obtaining reliability information for high-reliability products and systems. In addition to product/system degradation measurements, it is common nowadays to dynamically record product/system usage as well as other environmental variables such as load, temperature, and humidity, which we refer to as dynamic covariate information. In this paper, we introduce a class of models for analyzing degradation data with dynamic covariate information. We use general path models with individual random effects to describe degradation paths and parametric models to describe the covariate process. Physically motivated models are proposed to estimate the effects of dynamic covariates on the degradation process. The unknown parameters in the degradation data model and covariate process model are estimated by using maximum likelihood. We also describe algorithms for computing the estimate of the lifetime distribution induced by the proposed degradation path model. The proposed methods are illustrated with an application for predicting the life of organic paints and coatings in a complicated dynamic environment (i.e., changing UV spectrum and intensity, temperature, and humidity).
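
    A minimal sketch of the general-path idea, assuming a simple damage model in which the degradation rate depends on a shared dynamic covariate through an exponential link and on unit-specific random effects; all parameters, the covariate process, and the failure threshold below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical setup: degradation accumulates at a rate driven by a
    # dynamic covariate x(t) (say, UV intensity), with a unit-specific
    # random effect on the rate coefficient -- a simple general path model.
    n_units, n_times = 20, 100
    t = np.linspace(0, 10, n_times)
    x = 1.0 + 0.5 * np.sin(2 * np.pi * t / 5)    # shared covariate process
    beta = rng.normal(1.0, 0.2, size=n_units)    # individual random effects

    # Cumulative damage: integral of beta_i * exp(gamma * x(t)) dt + noise
    gamma = 0.3
    rate = np.exp(gamma * x)
    dt = t[1] - t[0]
    paths = np.cumsum(beta[:, None] * rate[None, :] * dt, axis=1)
    paths += rng.normal(0, 0.05, size=paths.shape)   # measurement error

    # Induced lifetime: first time each path crosses the failure threshold
    threshold = 8.0
    crossed = paths >= threshold
    lifetimes = np.where(crossed.any(axis=1), t[crossed.argmax(axis=1)], np.inf)
    print(f"median simulated lifetime: {np.median(lifetimes):.2f}")
    ```
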
  • Phase I Analysis of Autocorrelated Time Series Data

    Authors: Erdi Dasdemir (Hacettepe University), Christian Weiß (Helmut Schmidt University), Murat Caner Testik (Hacettepe University), Sven Knoth (Helmut Schmidt University)
    Primary area of focus / application: Process
    Secondary area of focus / application: Quality
    Keywords: Autocorrelation, AR(1) process, Control chart design, Parameter estimation, Phase I, Outliers, ARL
    Submitted at 20-May-2014 09:10 by Erdi Dasdemir
    Accepted
    22-Sep-2014 11:05 Phase I Analysis of Autocorrelated Time Series Data
    In Statistical Process Control, there are two phases of control chart applications. In order to design a control chart for process monitoring, Phase I is the process of parameter estimation and outlier filtering, repeated until a reliable chart design is obtained. Since it is difficult to estimate parameters with incomplete observations (filtered outliers), Phase I becomes challenging when observations are correlated. In this research, the impact of autocorrelation on Phase I analysis, as well as the effect of Phase I analysis on the performance of Phase II implementations, i.e., the use of the chart for online process monitoring, is investigated. Simulation is used to generate autocorrelated observations from an AR(1) process. Several scenarios, formed as combinations of different numbers of observations, autocorrelation parameters, and outlier rates, are considered. The behavior of the maximum likelihood and conditional sum of squares estimators under these scenarios is compared, and their effects on average run length (ARL) performance are investigated.
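
    To make the two estimators concrete, here is a small self-contained Python sketch that simulates a contaminated AR(1) Phase I data set and computes the conditional-sum-of-squares and exact maximum likelihood estimates of the autoregressive parameter; the sample size, contamination level, and parameter values are illustrative, not those of the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Simulate a mean-zero AR(1) series with a few additive outliers,
    # mimicking an unclean Phase I data set.
    n, phi, sigma = 200, 0.6, 1.0
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    outlier_idx = rng.choice(n, size=5, replace=False)
    x[outlier_idx] += 5 * sigma                  # contamination

    # Conditional sum of squares: equivalent to OLS of x_t on x_{t-1}
    phi_css = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

    # Exact Gaussian maximum likelihood (mean assumed known and zero;
    # additive constants dropped from the log-likelihood)
    def neg_loglik(params):
        p, s = params
        if not (-0.999 < p < 0.999) or s <= 0:
            return np.inf
        resid = x[1:] - p * x[:-1]
        ll = -0.5 * np.log(s**2 / (1 - p**2)) - x[0]**2 * (1 - p**2) / (2 * s**2)
        ll += np.sum(-0.5 * np.log(s**2) - resid**2 / (2 * s**2))
        return -ll

    phi_ml = minimize(neg_loglik, x0=[0.5, 1.0], method="Nelder-Mead").x[0]
    print(f"CSS: {phi_css:.3f}, ML: {phi_ml:.3f} (true phi = {phi})")
    ```
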
  • Using Multiblock PLS Models and Optimization for Rapidly Developing New Product Formulations

    Authors: John F. MacGregor (McMaster University and ProSensus, Inc.)
    Primary area of focus / application: Process
    Secondary area of focus / application: Modelling
    Keywords: Product development, Latent variables, DoE approaches for optimally augmenting databases, Optimization
    Submitted at 26-May-2014 23:13 by John MacGregor
    Accepted
    24-Sep-2014 09:00 Using Multiblock PLS Models and Optimization for Rapidly Developing New Product Formulations
    The continual development of products with improved properties and of new products for different application areas is an important area for most chemical, material and food companies. There are basically three general degrees of freedom in developing a new product: (i) the choice of raw materials or ingredients, (ii) the ratios in which to combine or react the materials (formulation or recipe), and (iii) the process conditions under which to manufacture the product. These three degrees of freedom are highly interactive, making it important to consider all of them simultaneously.

    In this presentation we discuss multivariate latent variable analysis methods for extracting information from the development databases, multivariate DoE approaches for optimally augmenting the databases, and optimization methods for rapidly formulating products that meet all the desired quality specifications at minimum cost.
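
    As an illustrative sketch only (the talk uses true multiblock PLS; here the three blocks are simply concatenated, which ignores block weighting), the following Python code fits a latent variable model to a synthetic development database and then inverts it by constrained optimization to find a low-cost candidate formulation meeting assumed quality specs.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)

    # Stand-in for a development database: three blocks (ingredient
    # properties, formulation ratios, process conditions) concatenated
    # into X, with two product quality measurements in Y.
    X = rng.normal(size=(60, 9))                 # 3 blocks x 3 variables
    Y = X @ rng.normal(size=(9, 2)) + rng.normal(0, 0.1, size=(60, 2))

    pls = PLSRegression(n_components=3).fit(X, Y)

    # Model inversion: search the input space for a cheap formulation
    # whose predicted qualities meet assumed lower specs (the cost vector
    # and spec limits are hypothetical).
    cost = rng.uniform(0.5, 2.0, size=9)
    spec_lo = np.array([0.5, -0.5])

    def objective(x):
        return cost @ np.square(x)               # smooth proxy for cost

    cons = {"type": "ineq",
            "fun": lambda x: pls.predict(x.reshape(1, -1)).ravel() - spec_lo}
    res = minimize(objective, x0=np.zeros(9), constraints=cons, method="SLSQP")
    print("candidate formulation:", np.round(res.x, 2))
    ```
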

    Some examples of using this approach include the development of functional polymers for medical devices and for the core materials used in the very popular Srixon golf balls (now used by a large number of the tour pros), the development of novel high performance polymeric coatings, and the development of new food formulations.
  • Measuring and Controlling Operator Differences in Data from a Skilled Work Process

    Authors: Froydis Bjerke (Animalia Meat and Poultry Research Centre)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Process
    Keywords: Linear regression, Graphical displays, Sources of variance, Meat processing, Calibration of workers
    Submitted at 27-May-2014 14:23 by Froydis Bjerke
    Accepted
    24-Sep-2014 10:15 Measuring and Controlling Operator Differences in Data from a Skilled Work Process
    Meat deboning and cutting is skilled labour, and specifications (de facto national standards) for prime cuts and trimmings do exist. However, operators also make use of their acquired experience and views in the process of separating meat from bones into prime meat cuts, trimmings, fat and sinews, thus contributing to the variance in the related meat weights and measures.

    In Animalia’s pilot plant, experienced operators produce data of cut yield (weights) and quality (measures) from individual carcasses by following specified cutting patterns. Each operator’s ID is also registered throughout, so the data can be used to estimate individual, but unwanted, differences between operators. Ideally, operators should not contribute to explained variance in a yield model. However, when they do, the differences can be measured and corrective actions taken.

    E(yield) = b0 + b1·inweight + b2·class + b3·fatgroup + d_j, where d_j is the effect of operator j.
    Linear regression models are used to predict the yield of prime cuts and trimmings from carcass parts, and ways of quantifying and dealing with operator differences are presented. The choice of response variables, that is, absolute vs. relative (meat-to-carcass) weight variables in the models, is discussed, bearing in mind that the raw material (animal carcasses) is subject to biological variation, and that carcass assessments (class, fat group) are uncertain variables, too.
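
    A minimal sketch of such a yield model in Python, with operator entered as a categorical term so the d_j estimates can be inspected directly; the data, coefficient values, and variable names are synthetic stand-ins for the pilot-plant records.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)

    # Synthetic stand-in for the pilot-plant records: carcass input
    # weight, class score ("klass"), fat group, operator ID, and yield.
    n = 300
    df = pd.DataFrame({
        "inweight": rng.normal(80, 8, n),
        "klass": rng.integers(1, 6, n),
        "fatgroup": rng.integers(1, 5, n),
        "operator": rng.choice(["A", "B", "C", "D"], n),
    })
    op_effect = df["operator"].map({"A": 0.0, "B": 0.4, "C": -0.3, "D": 0.1})
    df["yield_kg"] = (5 + 0.3 * df["inweight"] + 0.5 * df["klass"]
                      - 0.4 * df["fatgroup"] + op_effect + rng.normal(0, 1, n))

    # Fit the yield model with operator as a categorical term; ideally
    # the d_j estimates are negligible -- if not, they flag a need for
    # calibration of workers.
    model = smf.ols("yield_kg ~ inweight + klass + fatgroup + C(operator)",
                    df).fit()
    print(model.summary().tables[1])
    ```
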
  • Designing Sequential Batch Experiments with Application to Pharmaceutical Processes

    Authors: Verity Fisher (University of Southampton)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Modelling
    Keywords: Nonparametric modelling, Gaussian process modelling, Expected improvement, Optimisation
    Submitted at 28-May-2014 10:22 by Verity Fisher
    Accepted
    22-Sep-2014 14:50 Designing Sequential Batch Experiments with Application to Pharmaceutical Processes
    Design of experiments is often used in process optimisation to save resources, cost and time. Standard response surface methodology fits polynomial response models to data from a sequence of experiments to locate the optimum. However, in situations where the response is more complex and mechanistic understanding is limited, as arises in many pharmaceutical processes, a more flexible modelling approach is required. We demonstrate how Gaussian process models and efficient global optimisation (EGO) algorithms, commonly used in computer experiments, can be employed in such situations, illustrating their use through a drug production example provided by GlaxoSmithKline.

    In the pharmaceutical industry, investigation using automated equipment is often carried out in sequential batches of experiments. Therefore, we present a batch-sequential extension to EGO. Comparisons are made to one-point-at-a-time sequential designs.
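
    As a sketch of batch-sequential EGO (the extension presented in the talk may differ; this uses the well-known "constant liar" heuristic), the Python code below maximises expected improvement over a Gaussian process surrogate and selects batches of three points on a toy one-dimensional response.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(5)

    def f(x):                                    # hypothetical yield surface
        return -(x - 0.3) ** 2 + 0.05 * np.sin(15 * x)

    X = rng.uniform(0, 1, size=(5, 1))           # initial design
    y = f(X).ravel()

    def expected_improvement(x_cand, gp, y_best):
        mu, sd = gp.predict(x_cand, return_std=True)
        z = (mu - y_best) / np.maximum(sd, 1e-9)
        return (mu - y_best) * norm.cdf(z) + sd * norm.pdf(z)

    # Batch-sequential loop with the "constant liar" heuristic: after each
    # pick, pretend its outcome equals the current best so the next pick
    # in the batch is pushed elsewhere in the design space.
    grid = np.linspace(0, 1, 201).reshape(-1, 1)
    for b in range(3):                           # 3 batches of 3 runs each
        X_fake, y_fake = X.copy(), y.copy()
        batch = []
        for _ in range(3):
            gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True)
            gp.fit(X_fake, y_fake)
            x_new = grid[np.argmax(expected_improvement(grid, gp, y_fake.max()))]
            batch.append(x_new)
            X_fake = np.vstack([X_fake, x_new])
            y_fake = np.append(y_fake, y_fake.max())   # the "lie"
        X = np.vstack([X, batch])
        y = np.append(y, f(np.array(batch)).ravel())   # run the whole batch

    print(f"best input so far: {X[np.argmax(y)][0]:.3f}")
    ```
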