ENBIS-7 in Dortmund

24 – 26 September 2007

My abstracts


The following abstracts have been accepted for this event:

  • The Statistical Efficiency Conjecture

    Authors: Ron Kenett (1), Anne De Frenne (2), Xavier Tort-Martorell (3), Christopher McCollin (4)
    Submitted at 31-Aug-2007 10:12
    In this work we attempt to demonstrate the impact of statistical methods on process and product improvements and the competitive position of organisations. We describe a systematic approach for the evaluation of benefits from process improvement and quality by design and develop and validate the Statistical Efficiency Conjecture that links management maturity with the impact level of problem solving and improvements driven by statistical methods.

    The different approaches to the management of industrial organizations can be summarised and classified using a four-step Quality Ladder [Kenett and Zacks, 1998]. The four approaches are 1) Fire Fighting, 2) Inspection, 3) Process Control and 4) Quality by Design and Strategic Management. The Quality Ladder maps each management approach to a corresponding set of statistical methods.

    Efficient implementation of statistical methods requires a proper match between management approach and statistical tools. We demonstrate, with 21 case studies, the benefits achieved by organisations from process and quality improvements. The underlying theory behind the approach is that organisations that increase the maturity of their management system, moving from fire fighting to quality by design, enjoy increased benefits and significant improvements in their competitive positions.

    Keywords: Improvement projects, Six Sigma, DMAIC, quality by design, robust design, practical statistical efficiency, Quality Ladder, management consulting.


    (1) KPA Ltd., Raanana, Israel and University of Torino, Torino, Italy, ron@kpa.co.il
    (2) Math -X sprl, Brussels, Belgium
    (3) Technical University of Catalonia (UPC), Barcelona, Spain
    (4) The Nottingham Trent University, UK

  • Validating Clinical Trial Protocols with Simulations

    Authors: Tony Greenfield (1), Ron Kenett (2)
    Submitted at 31-Aug-2007 10:17
    Clinical trials are on the critical path of drug and treatment development. They are expensive in time as well as in money. A clinical trial is essential before any new, and perhaps revolutionary, product can reach the market. The trial protocol states the design of the clinical trial, how it will be managed, and how a multitude of assumptions will be tested empirically. The trial will determine whether the proposed treatment actually does what its sponsors claim it can achieve.

    Clinical trials raise complex statistical and ethical issues. A clinical trial that is not properly designed statistically, for example with very low power, can be considered unethical. But an over-designed trial, which lasts a long time and involves too many patients, is also unethical. The former may fail to show that a drug is more effective than its comparator, so patients will have been submitted to a trial with little hope of a useful result. The latter will require some patients to continue receiving the less effective treatment longer than necessary and it will delay the marketing of the more effective drug.

    Protocols of clinical trials are traditionally designed by medical experts with the help of statisticians. The main role of a statistician has typically been to determine sample sizes. However, the evaluation of the trial strategy involves many parameters not addressed by simple power calculations based on t-tests or ANOVA.

    In this work we describe how, using specially designed simulations, we can evaluate a clinical trial protocol and assess the impact of various assumptions such as drop-out rates, patient presentation rates, compliance, treatment effects, endpoint dependencies, exclusion criteria and distributions of population and response variables. The evaluation will focus on the overall power of the trial to detect clinically significant differences, and on its cost. We demonstrate the approach with a case study.
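
    The simulation-based protocol evaluation described above can be sketched in a few lines. The following toy example is a hedged illustration, not the authors' actual simulator: it assumes a two-arm parallel design, a normal response, independent random drop-out and a two-sample z-test, and all parameter values are made up.

```python
# Toy Monte Carlo evaluation of a two-arm trial protocol.
# All parameter values (effect size, drop-out rate, n) are illustrative only.
import random
import statistics

def simulate_trial(n_per_arm=100, effect=0.5, sd=1.0, dropout=0.15, rng=None):
    """Simulate one trial; return True if the two-sided z-test rejects H0."""
    rng = rng or random.Random()
    # Each recruited patient independently drops out with probability `dropout`
    control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)
               if rng.random() > dropout]
    treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)
               if rng.random() > dropout]
    # Two-sample z-test (normal approximation, adequate for n around 100)
    se = (statistics.variance(control) / len(control)
          + statistics.variance(treated) / len(treated)) ** 0.5
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return abs(z) > 1.96          # two-sided alpha = 0.05

def estimated_power(n_trials=2000, **kwargs):
    """Estimate power as the rejection rate over repeated simulated trials."""
    rng = random.Random(1)
    return sum(simulate_trial(rng=rng, **kwargs) for _ in range(n_trials)) / n_trials

power = estimated_power()
print(f"estimated power: {power:.2f}")
```

    Varying the assumed drop-out rate, effect size or sample size in such a simulation shows how the trial's overall power responds to each protocol assumption, which is what simple closed-form power calculations cannot capture.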


    (1) Greenfield Research, UK.
    (2) KPA Ltd., Israel and University of Torino, Torino, Italy

  • Bayesian versus non-Bayesian design of choice experiments in marketing

    Authors: Peter Goos, Roselinde Kessels, Bradley Jones, Martina Vandebroek
    Submitted at 31-Aug-2007 11:31
    In the marketing research and statistics literature, the optimal design of choice-based conjoint studies has received a lot of attention. Roughly speaking, two lines of research can be identified. One line of research focuses on the computationally intensive construction of Bayesian optimal designs for choice experiments. Another line uses combinatorial insights and a simplifying assumption about the parameters of the multinomial logit model to construct optimal designs. The purpose of this presentation is to provide a detailed simulation-based comparison of the two approaches. The comparison will focus on the precision of estimation and prediction, and provides substantial support in favor of the Bayesian approach. As this approach is computationally intensive, we will also discuss a fast algorithm for computing the Bayesian optimal designs.

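
    As a rough illustration of the Bayesian criterion involved (a sketch under an assumed normal prior and a made-up toy design, not the algorithm discussed in the presentation), the Bayesian D-error of a multinomial logit choice design can be approximated by averaging the D-error over draws from a prior on the part-worths:

```python
# Hedged sketch: Bayesian D-error of a multinomial logit (MNL) choice design,
# approximated by Monte Carlo draws from a prior on the part-worths.
# The design and the prior below are invented for illustration.
import numpy as np

def mnl_information(design, beta):
    """Fisher information of the MNL model for one design.
    design: list of (J x K) choice-set matrices; beta: (K,) part-worths."""
    K = len(beta)
    info = np.zeros((K, K))
    for X in design:
        u = X @ beta
        p = np.exp(u - u.max())
        p /= p.sum()                       # choice probabilities (softmax)
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def bayesian_d_error(design, prior_draws):
    """Average D-error, det(I)^(-1/K), over prior draws of beta."""
    K = prior_draws.shape[1]
    errs = [np.linalg.det(mnl_information(design, b)) ** (-1.0 / K)
            for b in prior_draws]
    return float(np.mean(errs))

# Toy design: 4 choice sets, 2 alternatives each, 2 effects-coded attributes
design = [np.array([[1.0, 1.0], [-1.0, -1.0]]),
          np.array([[1.0, -1.0], [-1.0, 1.0]]),
          np.array([[1.0, 1.0], [1.0, -1.0]]),
          np.array([[-1.0, 1.0], [1.0, 1.0]])]
rng = np.random.default_rng(0)
draws = rng.normal(loc=[-0.5, 0.5], scale=0.3, size=(200, 2))  # assumed prior
print(f"Bayesian D-error: {bayesian_d_error(design, draws):.3f}")
```

    A Bayesian optimal design minimises this averaged D-error over candidate designs; the non-Bayesian shortcut instead evaluates the criterion at a single simplifying parameter value, which is the contrast the presentation examines.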
  • Statistical consultancy. What's in it for me?

    Authors: Roland Caulcutt
    Submitted at 31-Aug-2007 20:22
    Our consultancy clients will usually require advice or guidance on the collection, analysis and interpretation of data. The statistician is well equipped to provide this advice, based on his/her deep understanding of statistical theory and practice. But, what else might the client hope to gain from the consultancy interaction? What does the statistician hope to achieve? How about the other interested parties; what’s in it for them?

    This presentation will discuss the psychological needs of all the stakeholders in the statistical consultancy interaction. If the statistician does not respond to these needs, he/she may experience disappointment and greatly reduced effectiveness. How, then, should the consultant operate in order to increase the probability of success, in an environment where each participant may be wondering “What’s in it for me?”.

  • Bayesian Networks in Customer Satisfaction Surveys

    Authors: Silvia Salini (1), Ron Kenett (2)
    Submitted at 2-Sep-2007 14:38
    A Bayesian Network is a probabilistic graphical model that represents a set of
    variables and their probabilistic dependencies. Formally, Bayesian Networks are
    directed acyclic graphs whose nodes represent variables and whose arcs encode
    the conditional dependencies between the variables. Nodes can represent any
    kind of variable, be it a measured parameter, a latent variable or a
    hypothesis; they are not restricted to random variables. Efficient algorithms
    exist that perform inference and learning in Bayesian Networks. Bayesian
    Networks that model sequences of variables are called Dynamic Bayesian
    Networks. Harel et al. (2007) provide a comparison between Markov Chains and
    Bayesian Networks in the analysis of web usability from e-commerce data. A
    comparison of regression models, SEMs and Bayesian Networks is presented in
    Anderson et al. (2004).

    In this paper we apply Bayesian Networks to the analysis of customer
    satisfaction surveys and demonstrate the potential of the approach. Bayesian
    Networks offer advantages in implementing managerially focused models over
    other statistical techniques designed primarily for evaluating theoretical
    models. These advantages include providing a causal explanation using
    observable variables within a single multivariate model, and the analysis of
    nonlinear relationships contained in ordinal measurements. Other advantages
    include the ability to conduct probabilistic inference for prediction and
    diagnostics, with an output metric that can be understood by both managers and
    academics.


    (1) Department of Economics, Business and Statistics
    University of Milan, Italy
    (2) KPA Ltd., Israel and University of Torino, Torino, Italy

  • New Adaptive EWMA Control Charts

    Authors: Seiichi Yasui, Yoshikazu Ojima (Tokyo University of Science, Japan)
    Submitted at 3-Sep-2007 03:03
    Exponentially weighted moving average (EWMA) control charts are more powerful
    for detecting small shifts than Shewhart-type control charts. Furthermore, the
    average time to detect a shift can be shortened if the sampling interval and/or
    the sample size is changed adaptively, depending on the value of the plotted
    statistic. In an EWMA control chart, the plotted statistic is the weighted
    average of the previous plotted statistic and the current observation; hence,
    the weight can also be changed depending on the value of the plotted statistic.
    In this study, an adaptive procedure for the weight in an EWMA control chart is
    proposed. The proposed adaptive EWMA control chart has warning limits and
    control limits: if the plotted statistic exceeds a warning limit, the weight is
    changed. We evaluate the performance in detecting several out-of-control
    situations by the Monte Carlo method. The adaptive EWMA control chart is more
    powerful for detecting small shifts than the traditional EWMA control chart.
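
    The adaptive weighting scheme described in the abstract can be sketched as follows. The constants (the two weights and the limit multiples) and the fixed-variance approximation for the limits are illustrative assumptions, not the authors' proposal.

```python
# Hedged sketch of an adaptive EWMA chart: the smoothing weight is increased
# while the plotted statistic sits beyond a warning limit. Constants are
# illustrative; limits use the asymptotic EWMA standard deviation for the
# base weight, a simplification that ignores the weight switching.
import random

def adaptive_ewma_run_length(xs, lam=0.1, lam_fast=0.4, warn=2.0, control=2.7):
    """Return the index at which the chart signals on data xs, else None.
    xs is assumed standard normal when in control; limits are multiples of
    sigma_z = sqrt(lam / (2 - lam)), the asymptotic EWMA standard deviation."""
    sigma_z = (lam / (2.0 - lam)) ** 0.5
    z, weight = 0.0, lam
    for t, x in enumerate(xs):
        z = weight * x + (1.0 - weight) * z      # EWMA update
        if abs(z) > control * sigma_z:
            return t                             # out-of-control signal
        # adapt: react faster while inside the warning zone
        weight = lam_fast if abs(z) > warn * sigma_z else lam
    return None

rng = random.Random(7)
# in-control for 50 observations, then a small shift of 0.5 sigma
xs = [rng.gauss(0.0, 1.0) for _ in range(50)] \
   + [rng.gauss(0.5, 1.0) for _ in range(200)]
print("signal at observation:", adaptive_ewma_run_length(xs))
```

    Repeating such runs many times and averaging the run lengths is exactly the Monte Carlo performance evaluation the abstract describes.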