ENBIS-14 in Linz

21 – 25 September 2014; Johannes Kepler University, Linz, Austria
Abstract submission: 23 January – 22 June 2014

My abstracts


The following abstracts have been accepted for this event:

  • Prediction for Stochastic Growth Processes with Jumps

    Authors: Katja Ickstadt (TU Dortmund University), Simone Hermann (TU Dortmund University), Christine Mueller (TU Dortmund University)
    Primary area of focus / application: Modelling
    Keywords: Constructional engineering, Crack width curve, Stochastic process model, Bayesian analysis, Prediction
    Submitted at 27-Feb-2014 16:48 by Katja Ickstadt
    Accepted
    22-Sep-2014 11:05 Prediction for Stochastic Growth Processes with Jumps
    In constructional engineering, experiments on material fatigue are expensive and therefore rare. In our research project "Statistical methods for damage processes under cyclic load" within Collaborative Research Centre 823 at TU Dortmund University, the engineers conducted an experiment in which several prestressed concrete beams with initial cracks were subjected to cyclic loading. The observed crack widths exhibit irregular jumps of increasing frequency that influence the growth process substantially. The aim is to find a stochastic model describing the process in order to predict the development of the crack width curve.

    We propose a stochastic process defined by a stochastic differential equation. This process includes a deterministic trend, an error term with autoregressive variance, and an additive jump term, a compound of a Poisson process with increasing frequency rate and autoregressive jump heights. The predictive distribution will be presented and approximated by simulating from the posterior distributions obtained via a corresponding MCMC algorithm: first for the Poisson process and then for the whole stochastic process conditional on the Poisson process.
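    The forward model described above can be sketched in a few lines. This is a simplified illustration, not the authors' fitted model: all parameter values are invented, and the noise variance is held constant rather than autoregressive; only the increasing Poisson jump rate and the autoregressive jump heights follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_crack_width(T=1000, dt=1.0, beta=0.002, sigma=0.01,
                         lam0=0.001, lam_growth=0.004, phi=0.6, mu_jump=0.05):
    """Growth process with drift, noise, and jumps from a Poisson process
    whose intensity increases over time (all parameters illustrative)."""
    t = np.arange(T) * dt
    x = np.zeros(T)
    prev_jump = mu_jump
    for i in range(1, T):
        lam = lam0 + lam_growth * t[i] / T          # increasing jump intensity
        jump = 0.0
        for _ in range(rng.poisson(lam * dt)):
            # autoregressive jump heights: each jump depends on the previous one
            prev_jump = phi * prev_jump + (1 - phi) * mu_jump \
                        + 0.01 * rng.standard_normal()
            jump += max(prev_jump, 0.0)
        drift = beta * dt                           # deterministic trend
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        x[i] = x[i - 1] + drift + noise + jump
    return t, x

t, x = simulate_crack_width()
```

    In the full Bayesian analysis, such forward simulations would be driven by draws from the posterior distributions of the parameters to approximate the predictive distribution of the crack width curve.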
  • Approximate Model Spaces for Model-Robust Experiment Design

    Authors: Byran Smucker (Miami University (Ohio)), Nathan Drew (Epsilon)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Model-robust, Two-level designs, Approximate model space, D-optimality, Exchange algorithm
    Submitted at 10-Mar-2014 21:16 by Byran Smucker
    Accepted
    23-Sep-2014 16:00 Approximate Model Spaces for Model-Robust Experiment Design
    Fractional factorial designs are commonly used to estimate main effects and two-factor interactions. However, estimation properties can be improved and/or the number of runs reduced if the experimenter assumes some level of effect sparsity and approaches the problem in terms of model-robustness. That is, the experimenter specifies the set of all models which include the main effects and a chosen number of two-factor interactions. Then, a design is constructed that allows estimation of each model in this set with as much precision as possible. Though this approach is highly effective compared to standard designs, the size of the model sets can easily grow so large that it is difficult or computationally infeasible to construct the model-robust designs. In this talk, we present a method that allows designs to be constructed that are robust for large model sets while requiring only a small fraction of the computational effort. This is accomplished by choosing a small subset from the full space of models and constructing a design that is robust to this subset. Though the resulting designs can be constructed relatively quickly, they have estimation properties that are quite competitive with designs that are obtained by utilizing the entire model space. More generally, this approach can be applied to any experimental scenario for which a set of models of interest is supplied.
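    The core idea (sample a small subset of the full model space, then search for a design whose worst-case estimation criterion over that subset is as good as possible) can be sketched as follows. This is a deliberately crude illustration: the factor counts are invented, and a random design search stands in for the authors' exchange algorithm.

```python
import itertools
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)

k, g = 5, 2          # 5 two-level factors; models = main effects + g interactions
pairs = list(itertools.combinations(range(k), 2))
full_space = list(itertools.combinations(pairs, g))   # all models of interest
approx_space = random.sample(full_space, 8)           # small random subset

def model_matrix(D, interactions):
    """Expand a +/-1 design into the model matrix for one candidate model."""
    cols = [np.ones(len(D))] + [D[:, j] for j in range(k)]
    cols += [D[:, a] * D[:, b] for a, b in interactions]
    return np.column_stack(cols)

def min_log_det(D, models):
    """Worst-case log|X'X| over a set of models (a D-based robustness criterion)."""
    vals = []
    for m in models:
        X = model_matrix(D, m)
        sign, logdet = np.linalg.slogdet(X.T @ X)
        vals.append(logdet if sign > 0 else -np.inf)
    return min(vals)

# random search over n-run +/-1 designs (an exchange algorithm would do better)
n = 12
best, best_val = None, -np.inf
for _ in range(200):
    D = rng.choice([-1.0, 1.0], size=(n, k))
    v = min_log_det(D, approx_space)
    if v > best_val:
        best, best_val = D, v
```

    Because `approx_space` holds 8 models instead of all 45, each criterion evaluation is far cheaper, which is the source of the computational savings the abstract describes.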
  • An Application of Zellner’s g Prior to Reliability Growth Models

    Authors: Nikolaus Haselgruber (CIS Consulting in Industrial Statistics GmbH)
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Modelling
    Keywords: Reliability growth, Stochastic model, Zellner's g prior, FRACAS
    Submitted at 13-Mar-2014 10:38 by Nikolaus Haselgruber
    Accepted
    22-Sep-2014 11:05 An Application of Zellner’s g Prior to Reliability Growth Models
    In a product development process, one of the last steps before market release is reliability growth testing. During this period, the tested units are monitored precisely and observed failures are eliminated systematically, guided by defined procedures (e.g., FRACAS = Failure Reporting, Analysis and Corrective Action System). To describe the effect of the failure elimination activities over test time, reliability growth models are usually applied. A major challenge here is that, on the one hand, reliability tests are particularly long, as they cover a substantial portion of the intended product lifetime, while on the other hand management urges to see results as soon as tests have started.
    To provide a reliability prediction soon after the start of testing and simultaneously prevent overfitting by taking existing information into account, this work presents the application of Zellner's g prior to reliability growth models.
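    The shrinkage behaviour of Zellner's g prior can be illustrated on a regression formulation. The abstract does not specify the growth model, so a Duane-style log-linear trend is used here purely as a hypothetical stand-in; the data and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_prior_posterior_mean(X, y, g):
    """Posterior mean of regression coefficients under Zellner's g prior with
    a zero prior mean: the least-squares estimate shrunk by g / (1 + g)."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (g / (1.0 + g)) * beta_ols

# illustrative data: Duane-style log-linear reliability growth trend
t = np.arange(1, 51, dtype=float)                 # test time
X = np.column_stack([np.ones_like(t), np.log(t)])
y = 0.5 + 0.3 * np.log(t) + 0.05 * rng.standard_normal(len(t))

beta_small_g = g_prior_posterior_mean(X, y, g=1.0)    # strong shrinkage
beta_large_g = g_prior_posterior_mean(X, y, g=100.0)  # close to least squares
```

    A small g pulls the estimate strongly toward the prior, which guards against overfitting early in testing when little data is available; as test data accumulate, a larger effective g lets the data dominate.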
  • Synthetic Control Charts --- Panacea or Placebo

    Authors: Sven Knoth (Helmut Schmidt University, University of the Federal Armed Forces Hamburg)
    Primary area of focus / application: Process
    Secondary area of focus / application: Quality
    Keywords: Conditional and cyclical steady-state ARL, Conditional expected delay, Markov chain, Numerical solution of integral equations, Runs rules, Zero-state ARL
    Submitted at 26-Mar-2014 14:20 by Sven Knoth
    Accepted
    22-Sep-2014 10:45 Synthetic Control Charts --- Panacea or Placebo
    The synthetic chart principle proposed by Wu and Spedding (2000) -- basically an extension of the Conforming Run Length (CRL) chart introduced by Bourke (1991) -- initiated a stream of publications in the control charting literature. Originally, it was claimed that the new chart has superior Average Run Length (ARL) properties. Davis and Woodall (2002) indicated that the synthetic chart is nothing other than a runs rule chart -- Klein (2000) had recommended something similar, but much simpler. Moreover, they criticized the design of the performance evaluation and advocated using the steady-state ARL. The latter measure was then used, e.g., in Wu et al. (2010). Most of the papers on synthetic charts that actually used the steady-state framework did not describe it rigorously -- see Khoo et al. (2011) for an exception, where it was mentioned that the cyclical steady-state design was considered. The aim of the talk is to present a careful steady-state analysis (cyclical and the more popular conditional) for the synthetic chart, the original 2 of L+1 (L>=1) runs rule chart, and competing two-sided EWMA charts with different types of control limits. For the EWMA chart, some enlightening new results for the cyclical steady-state ARL are obtained. Finally, it turns out that the EWMA chart has a uniformly (over a large range of potential shifts) better steady-state ARL performance than the synthetic chart.
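    The synthetic chart's mechanics -- a Shewhart-type chart combined with a CRL criterion, signalling when two nonconforming samples fall within L samples of each other -- can be illustrated with a small Monte Carlo sketch. The limit k, the runs parameter L, and the shift size are illustrative, and the head-start feature of the original chart is omitted; this estimates the zero-state ARL by simulation, not the steady-state measures discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_chart_run_length(shift=0.0, k=2.0, L=5, max_n=10000):
    """Run length of a simplified synthetic chart: a sample beyond +/- k is
    nonconforming; signal when the gap to the previous nonconforming
    sample is at most L (i.e., CRL <= L)."""
    last_nc = None                          # index of last nonconforming sample
    for n in range(1, max_n + 1):
        x = shift + rng.standard_normal()
        if abs(x) > k:                      # nonconforming sample
            if last_nc is not None and n - last_nc <= L:
                return n
            last_nc = n
    return max_n

arl_in_control = np.mean([synthetic_chart_run_length(0.0) for _ in range(2000)])
arl_shifted = np.mean([synthetic_chart_run_length(1.0) for _ in range(2000)])
```

    Run-length distributions like these, evaluated under zero-state versus steady-state assumptions, are exactly where the competing performance claims in the literature diverge.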
  • Effect of Average Cycle Time and Cycle Time Variation on Inventory, Asset Utilization and Throughput for a Multi-Station Manufacturing Channel

    Authors: Bryan Dodson (SKF Group Six Sigma), Ivan Boychinov (SKF Bearings Bulgaria EAD), Rene Klerx (SKF Group Six Sigma)
    Primary area of focus / application: Business
    Secondary area of focus / application: Six Sigma
    Keywords: Optimization, Inventory, Asset utilization, Cycle time, Financial optimization
    Submitted at 28-Mar-2014 16:13 by Rene Klerx
    Accepted
    22-Sep-2014 11:55 Effect of Average Cycle Time and Cycle Time Variation on Inventory, Asset Utilization and Throughput for a Multi-Station Manufacturing Channel
    When cycle times become equal across a channel with multiple stations, downtime from lack of parts grows dramatically unless there is sufficient inventory to prevent the station with the lowest cycle time from being starved. A common manufacturing metric is asset utilization, and it is understandable for factory managers to want expensive machinery to have high utilization. A financial optimum can be reached by balancing asset utilization against inventory. As variability in cycle time increases, the problem is exacerbated. Financial optimization of this problem, along with the impact of the level of variation, will be demonstrated.
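    The starvation effect can be demonstrated with a minimal tandem-line simulation. This is a hypothetical sketch, not the authors' model: two stations in series with a finite buffer, lognormal cycle times, and invented parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_throughput(cv, n_parts=5000, mean_ct=1.0, buffer_size=5):
    """Two stations in series separated by a finite buffer; cycle times are
    lognormal with mean mean_ct and coefficient of variation cv.
    Returns throughput in parts per unit time."""
    sigma = np.sqrt(np.log(1 + cv**2))
    mu = np.log(mean_ct) - 0.5 * sigma**2
    ct1 = rng.lognormal(mu, sigma, n_parts)
    ct2 = rng.lognormal(mu, sigma, n_parts)
    done1 = np.zeros(n_parts)               # completion times at station 1
    done2 = np.zeros(n_parts)               # completion times at station 2
    for i in range(n_parts):
        # station 1 is blocked while the buffer ahead of station 2 is full
        start1 = done1[i - 1] if i else 0.0
        if i > buffer_size:
            start1 = max(start1, done2[i - buffer_size - 1])
        done1[i] = start1 + ct1[i]
        # station 2 is starved until part i has left station 1
        start2 = max(done1[i], done2[i - 1] if i else 0.0)
        done2[i] = start2 + ct2[i]
    return n_parts / done2[-1]

tp_low = channel_throughput(cv=0.1)
tp_high = channel_throughput(cv=0.5)
```

    With equal mean cycle times, higher cycle-time variation lowers throughput through starvation and blocking; the only countermeasures are more inventory (a larger buffer) or accepting lower asset utilization, which is the trade-off the financial optimization balances.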
  • Business and Academic Synergy in the World of Big Data

    Authors: Shirley Coleman (ISRU, Newcastle University), Andrea Ahlemeyer-Stubbe (Draftfcb München GmbH), Dave Stewardson (ENBIS)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Education & Thinking
    Keywords: Real-data, Bioinformatics, Workshops, Cloud-computing, Doctoral, PhD
    Submitted at 30-Mar-2014 18:18 by Shirley Coleman
    Accepted
    24-Sep-2014 10:15 Business and Academic Synergy in the World of Big Data
    Big data is an opportunity for statisticians, as more and more public interest focuses on data and expectations rise as to what can be done. In this environment, some data owners are likely to be disappointed: they may find that they have to spend more time than they wish preparing their data for analysis; they may find disappointing signal-to-noise levels; or they may find that the messages buried in the data are rather obvious and uninteresting. Nevertheless, data quality is not the only drawback; a main issue is that we have to develop new ways to deal with big data. As Chris Anderson, Editor-in-Chief of Wired, says, “one of the challenges is that companies haven’t hired enough statisticians and analysts, and have been too timid when it comes to using imperfect data”. Newcastle University’s School of Maths and Stats has been awarded funding to support 11 doctoral students every year for 5 years in a joint Centre for Doctoral Training with the School of Computer Science entitled Cloud Computing for Big Data. Research will address issues in big data analytics, bioinformatics, time series and Bayesian analysis. A key feature of this project is to expose students to real industrial and business data. The Industrial Statistics Research Unit (ISRU) will be involved in establishing projects, scoping, planning, facilitating interactions with companies, ensuring useful outcomes, and providing advice and support through workshops and mentoring. This paper reports on the range of clients offering datasets, their expectations, and the planned activities for the first batch of students in 2014. Through innovative programmes such as this, business and academia can promote each other and produce outcomes greater than either could achieve alone.