ENBIS-13 in Ankara

15 – 19 September 2013
Abstract submission: 5 February – 5 June 2013

My abstracts

The following abstracts have been accepted for this event:

  • Teaching Exploratory Data Analysis by Using the Data from the Red Bead Game

    Authors: Vladimir Shper (Moscow Institute of Steel & Alloys), Yuri Adler (Moscow Institute of Steel & Alloys), Elena Hunuzidi (Moscow Institute of Steel & Alloys), Olga Maximova (Moscow Power Institute)
    Primary area of focus / application: Education & Thinking
    Keywords: Teaching, statistical methods, exploratory data analysis
    Submitted at 26-May-2013 11:31 by Vladimir Shper
    Accepted
    18-Sep-2013 09:00 Teaching Exploratory Data Analysis by Using the Data from the Red Bead Game
    The well-known Red Bead Game (RBG), popularized by Dr. Deming many years ago, is used worldwide as a good starting point for helping managers, workers, students, etc. understand the basic principles of statistical thinking. At the ENBIS-9 conference in Goteborg we shared our experience of using an expanded version of the RBG to give a more thorough insight into such important ideas as the different phases of control charting and the influence of data uniformity on the limits of Shewhart charts. Now we would like to share our experience of using the expanded RBG data to teach students many useful statistical methods widely applied in Exploratory Data Analysis, such as stem-and-leaf and box-and-whisker displays, discrete and continuous distributions and their approximations, tests for normality, non-normal data analysis, etc. Examples of the data are presented, along with a discussion of the significance of this approach for teaching students.
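
    As an illustration of the kind of analysis described in this abstract, the sketch below simulates Red Bead Game draws and applies a few standard EDA tools in Python. The bead counts (4000 beads, 20% red, paddle of 50), the number of lots and the seed are assumptions for the example, not the authors' expanded RBG data.

    ```python
    # Minimal EDA sketch for simulated Red Bead Game data.
    # Setup is assumed to be the classic Deming box: 4000 beads, 20% red, paddle of 50.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Each "lot" is one paddle draw: the count of red beads among 50 drawn
    # without replacement from 800 red + 3200 white beads.
    draws = rng.hypergeometric(ngood=800, nbad=3200, nsample=50, size=200)

    # Five-number summary (the basis of the box-and-whisker plot).
    print("min, Q1, median, Q3, max:", np.percentile(draws, [0, 25, 50, 75, 100]))

    # Simple stem-and-leaf display (stems = tens digit).
    for stem in sorted(set(d // 10 for d in draws)):
        leaves = sorted(d % 10 for d in draws if d // 10 == stem)
        print(f"{stem} | {''.join(str(l) for l in leaves)}")

    # Compare with the binomial approximation and test normality.
    print("sample mean/var:", draws.mean(), draws.var(ddof=1))
    print("binomial mean/var:", 50 * 0.2, 50 * 0.2 * 0.8)
    print("Shapiro-Wilk test:", stats.shapiro(draws))
    ```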
  • The Statistics: Past, Present, and Future (A New Paradigm in Statistics)

    Authors: Yury Adler (Moscow Institute of Steel & Alloys)
    Primary area of focus / application: Education & Thinking
    Keywords: paradigm shift, statistics, future
    Submitted at 26-May-2013 11:35 by Vladimir Shper
    Accepted
    17-Sep-2013 17:30 The Statistics: Past, Present, and Future (A New Paradigm in Statistics)
    1. From time to time in the development of any area of human activity we see a change of paradigm. Such a moment is now under way in statistics.
    2. A new paradigm leads to new rules and new behavior.
    3. Teamwork with experts from different areas on real projects.
    4. Mainstream: operational definitions versus axiomatic ones.
    5. Visualization versus formalization.
    6. Design of experiments as a base of statistics.
    7. Data mining as a way to computerization.
    8. Statistics as the language of science (K. Pearson, G. Taguchi, H. Tsubaki, V. Nalimov).
    9. A new approach to education and training.
    10. Statistics as a part of common culture (H. Wells).
  • Can a Correction True Value be Better Known than the Measurand True Value?

    Authors: Franco Pavese (Consultant)
    Primary area of focus / application: Metrology & measurement systems analysis
    Keywords: correction, true value, measurement, uncertainty, systematic error, systematic effects, GUM
    Submitted at 28-May-2013 09:22 by Franco Pavese
    Accepted
    17-Sep-2013 09:00 Can a Correction True Value be Better Known than the Measurand True Value?
    Since the beginning of the history of modern measurement science, experimenters have faced the problem of dealing with systematic effects, as distinct from, and opposed to, random effects. Two main schools of thinking stemmed from the empirical and theoretical exploration of the problem: one dictating that the two species should be kept and reported separately, the other indicating ways to combine the two species into a single numerical value for the total uncertainty (often indicated as 'error'). The second school generally assumes that the expected value of the systematic effects is null by requiring that, for all systematic effects taken into account in the model, corresponding 'corrections' are applied to the measured values before the uncertainty analysis is performed. On the other hand, classical statistics calls the value of the measurand intended to be the object of measurement its 'true value', admitting that such a value exists objectively (e.g. the value of a fundamental constant) and that any experimental operation aims at obtaining an ideally exact measure of it. However, due to the uncertainty affecting every measurement process, this goal can be attained only approximately, in the sense that nobody can ever know exactly how much any measured value differs from the true value. The paper discusses the credibility of the numerical value attributed to an estimated correction, compared with the credibility of the estimate of the location of the true value, and concludes, contrary to the GUM, that the true value of a correction should be considered as imprecisely evaluable as the true value of any 'input quantity', and of the measurand itself.

    [1] F. Pavese, On the difference of meaning of 'zero correction': zero value versus no correction, and of the associated uncertainties, in Advanced Mathematical and Computational Tools in Metrology and Testing IX (F. Pavese et al., Eds.), vol. 9, World Scientific, Singapore, 2012, pp. 297–309.

    [2] F. Pavese, An introduction to data modeling principles in metrology and testing, in Advances in Data Modeling for Measurements in Metrology and Testing, Birkhäuser-Springer, Boston, 2009, pp. 1–30.
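
    To make the role of a correction in the uncertainty budget concrete, here is a minimal GUM-style sketch of the standard additive-correction model; the numerical values are hypothetical, and the sketch only frames the question the abstract raises rather than reproducing the paper's argument.

    ```python
    # GUM-style combination sketch: measurand estimate y = x + c, where c is an
    # additive correction for a systematic effect. All numbers are hypothetical.
    import math

    x, u_x = 100.0214, 0.0032   # measured value and its standard uncertainty
    c, u_c = 0.0, 0.0021        # a "zero correction" with non-zero uncertainty

    y = x + c
    u_y = math.sqrt(u_x**2 + u_c**2)   # combined standard uncertainty (uncorrelated inputs)

    print(f"y = {y:.4f}, u(y) = {u_y:.4f}")
    # Treating the correction as exactly known (u_c = 0) would understate u(y);
    # whether its true value can be known better than the measurand's is the
    # question the abstract discusses.
    ```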
  • Bayesian Estimation of Demand and Substitution Rates with Two Products

    Authors: Ulku Gurler (Bilkent University), Nilgun Ferhatosmanoglu (THK University), Emre Berk (Bilkent University)
    Primary area of focus / application: Other: ISBA
    Keywords: Bayes estimation, Demand rate, Substitution, Poisson arrival
    Submitted at 28-May-2013 16:01 by Ulku Gurler
    Accepted
    18-Sep-2013 11:15 Bayesian Estimation of Demand and Substitution Rates with Two Products
    Accurate estimation of demand rates is essential for timely satisfaction of customer demands and for the profitability of the business. Despite being a fundamental issue in production and inventory management, estimation remains a challenging problem, especially as product lifecycles become shorter, customer behavior becomes more complicated and substitution opportunities increase with product variety.
    In this study we consider the joint estimation of the demand arrivals and of the primary and substitute demand rates in a setting with two products, using a Bayesian estimation methodology. It is assumed that the demand for the products follows a Poisson process whose unknown arrival rate is a random variable.
    Under a general prior distribution setting we obtain the joint posterior distributions of the demand and substitution rates. We also consider two commonly encountered special cases and provide some numerical results to illustrate the performance of the estimators.
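
    As an illustration of the Bayesian machinery involved, the sketch below performs a conjugate Gamma-Poisson update for a single demand rate; the prior parameters and demand counts are hypothetical, and the paper's joint two-product model with substitution is richer than this single-rate special case.

    ```python
    # Conjugate Gamma-Poisson sketch for one demand rate (hypothetical numbers).
    from scipy import stats

    alpha0, beta0 = 2.0, 1.0          # Gamma(shape, rate) prior for the arrival rate
    demand_counts = [3, 5, 2, 4, 6]   # observed demands over 5 periods
    T = len(demand_counts)

    # Poisson likelihood => posterior is Gamma(alpha0 + sum(x_i), beta0 + T).
    alpha_post = alpha0 + sum(demand_counts)
    beta_post = beta0 + T

    posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
    print("posterior mean rate:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))
    ```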
  • Optimal Design of a Machine Part Using Design of Computer Experiments

    Authors: Jan-Willem Bikker (CQM)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Design of Computer Experiments, Transfer functions, Optimization, Design for Six Sigma, Simulation
    Submitted at 29-May-2013 08:30 by Jan-Willem Bikker
    Accepted
    16-Sep-2013 11:00 Optimal Design of a Machine Part Using Design of Computer Experiments
    The Dutch consultancy firm CQM supports many R&D departments in the development of products with quantitative methods, enabling fact-based decisions through mathematical modeling and statistics. One example among the many projects is the development of a machine part for which a simulation tool exists that provides estimates of the machine part's properties; a single simulation run, however, takes a few hours. The set of design parameters, geometric in nature, has as many as 11 dimensions, as well as various geometry-implied constraints. The knowledge and experience of the developers were carefully examined when formulating the parameters and their constraints. By running the simulations in batch mode according to a pre-generated DoCE scheme, we could evaluate the design. The results had a complex format (pictures, bin frequencies) and were summarized into 14 “response parameters”. We developed functions describing how the response parameters depend on the design parameters. Several constraints apply to these response parameters in order for the machine part to work. We developed a tool to visualize the design, the model outcomes, and the satisfaction of the constraints. Furthermore, the models allow finding “optimal” and feasible designs. The usefulness of this model-based approach is that, over several iterations in the project, the development team could ask us to play around with the constraints and the objective, gaining much insight into the possibilities and arriving at a good, validated design they would otherwise never have considered.
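
    To give an idea of the workflow described in this abstract, the sketch below generates a space-filling design and fits a least-squares response surface as a simple stand-in for the “transfer functions”; the toy simulator, the reduction to three parameters and the quadratic feature set are assumptions for illustration, not CQM's actual tool or models.

    ```python
    # DoCE-style sketch: Latin hypercube design, a cheap stand-in for the expensive
    # simulator, and a quadratic response surface fitted by least squares.
    import numpy as np
    from scipy.stats import qmc

    d, n = 3, 40                                        # toy: 3 design parameters, 40 runs
    design = qmc.LatinHypercube(d=d, seed=0).random(n)  # points in the unit cube [0, 1]^d

    def toy_simulator(x):
        # stands in for one (hours-long) simulation run
        return 2.0 * x[0] - x[1] ** 2 + 0.5 * x[0] * x[2]

    y = np.array([toy_simulator(x) for x in design])

    def features(X):
        # columns: intercept, linear terms x_i, and second-order terms x_i * x_j
        cols = [np.ones(len(X))] + [X[:, i] for i in range(d)]
        cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(features(design), y, rcond=None)

    # The fitted model is cheap to evaluate, e.g. for screening candidate designs:
    candidate = np.array([[0.8, 0.1, 0.5]])
    print("predicted response:", features(candidate) @ beta)
    ```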
  • Burn-in: Assessing Early Life Failure Probabilities with Implemented Countermeasures

    Authors: Daniel Kurz (Department of Statistics, Alpen-Adria University of Klagenfurt), Horst Lewitschnig (Infineon Technologies Austria AG, Villach), Jürgen Pilz (Department of Statistics, Alpen-Adria University of Klagenfurt)
    Primary area of focus / application: Reliability
    Keywords: Bayes, burn-in, Clopper-Pearson, countermeasure, decision theory, Generalized Binomial distribution, reliability
    Submitted at 29-May-2013 08:41 by Daniel Kurz
    Accepted
    18-Sep-2013 10:35 Burn-in: Assessing Early Life Failure Probabilities with Implemented Countermeasures
    Industrial applications show steadily increasing demands on quality and reliability. Several qualification tests are implemented in order to verify the products' long-term stability.

    Semiconductor devices show an increased failure rate at the beginning of their life. Different procedures are applied to keep this early life failure rate (ELFR) at a minimum. However, in the case of novel technologies, new production lines, etc., the actual ELFR needs to be evaluated. This is done by stressing a sample of the produced devices for production-relevant failures - referred to as a burn-in study. Based on the number of failures that occur, the early life failure probability p (as the parameter of a Binomial distribution) is assessed by applying Clopper-Pearson interval estimation or, more generally, Bayesian estimation techniques.

    In general, zero failures are required in order to meet the ppm requirements and to successfully complete a burn-in study. Once a fail occurs, the burn-in study actually has to be restarted.

    More efficiently, a countermeasure is implemented in the production process and its effectiveness is assessed by experts. If the countermeasure is 100% effective, the fail would not have occurred had the countermeasure already been implemented before the burn-in study; therefore, it is not counted in the burn-in statistics. If the countermeasure is less than 100% effective, there is only a certain probability that the fail would have been observed.

    We propose a statistical model for assessing early life failure probabilities that takes into account the effectiveness of the implemented countermeasures. The idea is to weight each fail according to its occurrence probability after extending the countermeasure to the process, giving an advanced estimation concept for early life failure probabilities.

    The model can be flexibly applied to a general number of failures and countermeasures, as well as to uncertain proportions of effectiveness. This requires generalized forms of the common Binomial distribution. Moreover, a Bayesian version of the proposed model offers the possibility of integrating further expert knowledge on the failure probability.

    To justify the model, it is further discussed with regard to its decision-theoretic background. The loss function, risk function and decision strategies are adapted to failures that are tackled by countermeasures. Optimality of the proposed model can be demonstrated by exploiting Bayesian decision principles.

    The benefit of the proposed model is that improvement measures in the chip production process are reflected in the early life failure probability without restarting the burn-in study. Thus, the ppm target can be proven by burning in an additional number of items for zero fails. Since burn-in is expensive in terms of cost, time and resources, this leads to an essential improvement in burn-in efficiency.

    Acknowledgement:
    The work has been performed in the project EPT300, co-funded by grants from Austria, Germany, Italy, The Netherlands and the ENIAC Joint Undertaking.
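
    As a rough numerical illustration of the weighting idea (not the authors' exact generalized-Binomial model), the sketch below mixes one-sided Clopper-Pearson upper bounds over the number of fails that would remain after the countermeasure; the sample size, fail count and effectiveness value are hypothetical.

    ```python
    # Weighted burn-in sketch: each observed fail is counted only with probability
    # (1 - effectiveness), and the Clopper-Pearson upper bound is averaged over the
    # possible remaining fail counts. Illustration only; numbers are hypothetical.
    from scipy import stats

    def cp_upper(k, n, conf=0.90):
        """One-sided Clopper-Pearson upper bound for p with k fails in n devices."""
        return 1.0 if k >= n else stats.beta.ppf(conf, k + 1, n - k)

    n = 3000     # burn-in sample size
    k = 2        # observed fails, both addressed by a countermeasure
    eff = 0.8    # assessed effectiveness of the countermeasure

    # Probability that j of the k fails would still have occurred with the
    # countermeasure in place (each fail still counts with probability 1 - eff).
    weights = [stats.binom.pmf(j, k, 1 - eff) for j in range(k + 1)]
    p_upper = sum(w * cp_upper(j, n) for j, w in zip(range(k + 1), weights))

    print("classical upper bound (all k fails counted):", cp_upper(k, n))
    print("weighted upper bound with countermeasure:", p_upper)
    ```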