ENBIS-14 in Linz

21 – 25 September 2014; Johannes Kepler University, Linz, Austria
Abstract submission: 23 January – 22 June 2014

The following abstracts have been accepted for this event:

  • Framework for Analyzing Malpractice Cases to Improve Healthcare Systems

    Authors: Shuki Dror (ORT Braude College), Dina Margol (ORT Braude College)
    Primary area of focus / application: Quality
    Keywords: Healthcare, Malpractice, Medical, Errors, Pareto, MSE, QFD
    Submitted at 9-Feb-2014 10:51 by Shuki Dror
    Accepted
    22-Sep-2014 14:30 Framework for Analyzing Malpractice Cases to Improve Healthcare Systems
    This study develops a framework for analyzing the judgments handed down by courts in
    malpractice cases. We analyzed 215 cases that included awards for damages caused by
    medical negligence, handled by Israeli courts from 2005 to 2011. The Pareto principle coupled
    with the Mean Square Error (MSE) criterion supports the selection of the vital hospital
    departments and the vital causes of claims. A Quality Function Deployment (QFD) matrix is
    used to translate the desired improvement in malpractice costs into relevant medical decisions
    and diagnostic tests.
    Based on the analysis, we conclude that the major share of the malpractice claims submitted to
    and upheld by the courts related to the obstetrics field. We also show that most claims have
    elements in common. When the causes are weighted by the total amount of money paid to victims
    by hospitals, based on court data, the group of vital causes becomes smaller.
    In this study we analyze claims related to the obstetrics department. Other departments where
    errors are frequently made should be addressed in turn, with the objective of continuously
    improving the medical system.
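
    As a rough illustration of the Pareto/MSE selection step, the sketch below splits hypothetical departmental claim costs into a "vital few" and a "trivial many" group at the cut point that minimizes the pooled within-group squared error. The department names, cost figures and this particular reading of the MSE criterion are assumptions for illustration, not the study's data or exact procedure.

        import numpy as np

        def vital_few(categories, values):
            """Split categories, sorted by contribution, into 'vital few' and
            'trivial many' at the cut that minimizes the pooled within-group
            squared error (one reading of the Pareto/MSE criterion)."""
            order = np.argsort(values)[::-1]                  # sort by descending contribution
            v = np.asarray(values, dtype=float)[order]
            names = [categories[i] for i in order]
            best_k, best_mse = 1, np.inf
            for k in range(1, len(v)):                        # try every possible cut point
                mse = ((v[:k] - v[:k].mean()) ** 2).sum() + ((v[k:] - v[k:].mean()) ** 2).sum()
                if mse < best_mse:
                    best_k, best_mse = k, mse
            return names[:best_k], names[best_k:]

        # Hypothetical claim costs per hospital department (illustration only)
        departments = ["Obstetrics", "Surgery", "Internal medicine", "Orthopedics", "Pediatrics"]
        costs = [48.0, 12.0, 9.0, 6.0, 4.0]
        vital, trivial = vital_few(departments, costs)
        print("Vital few:", vital)
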
  • Managing Models - A New Challenge while Benefiting from Big Data

    Authors: Marc Anger (StatSoft (Europe) GmbH)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Modelling
    Keywords: Big data, Model factory, Automation, Rules, Monitoring, Process, Predictive analytics
    Submitted at 14-Feb-2014 16:27 by Marc Anger
    Accepted (view paper)
    24-Sep-2014 10:35 Managing Models - A New Challenge while Benefiting from Big Data
    Big data is not about storing or handling large volumes of information or crunching numbers; it is about benefiting from data. In practice this means we will have hundreds of very precise predictive models for hundreds of customer segments instead of a single, more general model for all customers.
    This introduces the task of managing all these models in a reasonable way. This lecture offers a recipe for implementing a model factory as a true production environment.
    Major aspects are: how to build and evaluate models in parallel, how to deploy them automatically, how to monitor their performance and, if necessary, retrain them as new candidates, or even switch to another method and then move to a better new model. We need alerting in addition to monitoring, as well as tracked versioning of models and even of the individual scores. How can we move all those models from development to test and finally to production? We will also need different levels of approval and detailed rights for users in the analytics team. We even need business rules to override a model decision or to switch between models depending on the particular set of data.
    The lecture gives an overview of what to do and how to do it. To visualize and support the lecture we will see workflows, red/yellow/green dashboards, quality control charts, etc., produced with the STATISTICA platform. Please do not expect a software training or feature presentation; if you need that, feel free to contact the speaker.
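
    To make the monitoring-and-switching idea concrete, here is a minimal sketch of a traffic-light rule for one deployed model. The AUC thresholds, model names and champion/challenger logic are invented assumptions, not the STATISTICA workflows shown in the lecture.

        from dataclasses import dataclass

        @dataclass
        class ModelStatus:
            name: str
            auc: float                          # latest performance on freshly scored data

        GREEN, YELLOW = 0.80, 0.75              # hypothetical traffic-light thresholds

        def monitor(champion, challengers):
            """Decide what to do with the deployed (champion) model."""
            if champion.auc >= GREEN:
                return "keep"                                   # green: nothing to do
            best = max(challengers, key=lambda m: m.auc, default=None)
            if best is not None and best.auc > champion.auc:
                return f"switch to {best.name}"                 # a retrained candidate wins
            return "alert and retrain" if champion.auc < YELLOW else "watch"

        print(monitor(ModelStatus("segment_17_v3", 0.72),
                      [ModelStatus("segment_17_v4", 0.81)]))
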
  • Teaching Design of Experiments: A 25 Years Retrospective

    Authors: Ron Kenett (KPA Ltd.), David Steinberg (Tel Aviv University)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Education & Thinking
    Keywords: Design of Experiments, Physical experiments, Computer experiments, Statistical education, Lessons learned
    Submitted at 17-Feb-2014 22:51 by Ron Kenett
    Accepted
    22-Sep-2014 11:55 Teaching Design of Experiments: A 25 Years Retrospective
    In 1987 we published, in the Journal of Applied Statistics, an article titled "Some Experiences Teaching Factorial Designs in Introductory Statistics Courses". The paper described hands-on applied projects in which students are engaged in designing, conducting and analysing small experiments. Our overriding goal was to convey, in a classroom setting, how statistics is used to solve real problems. In this work, we provide an updated perspective on lessons learned in trying to achieve this same goal by teaching the design of experiments (DoE) in academic and industrial settings. The paper reviews various approaches to teaching DoE, moving from theoretical, mathematically based approaches to hands-on practical involvement of students in physical experiments and, eventually, in simulation-based computer experiments. We discuss pedagogical concerns and practical experience gained in such transitions and sketch some possible future directions.
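
    For a flavor of the kind of small hands-on experiment the paper describes, the sketch below generates a 2^3 full factorial design and estimates main effects. The factors and response values (e.g. paper-helicopter flight times) are invented for illustration and are not taken from the paper.

        from itertools import product
        import numpy as np

        factors = ["A", "B", "C"]
        design = np.array(list(product([-1, 1], repeat=3)))   # 8 runs in standard order
        # Hypothetical responses, one per run (e.g. flight time in seconds)
        y = np.array([2.1, 2.4, 2.0, 2.6, 3.0, 3.5, 2.9, 3.8])

        for j, name in enumerate(factors):
            effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
            print(f"Main effect of {name}: {effect:+.2f}")
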
  • A Two Stage Approach to Mapping Extreme Non-Hurricane Wind Speeds over the Contiguous United States

    Authors: Adam L. Pintar (National Institute of Standards and Technology), Emil Simiu (National Institute of Standards and Technology)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Reliability
    Keywords: Extreme winds, Extreme values model, Local regression, Isotach maps, United States
    Submitted at 24-Feb-2014 17:04 by Adam Pintar
    Accepted
    23-Sep-2014 16:20 A Two Stage Approach to Mapping Extreme Non-Hurricane Wind Speeds over the Contiguous United States
    A structure must be designed to withstand wind loads of an appropriate magnitude. That magnitude depends on several factors, one of which is the estimated direction-independent speed with an appropriate mean recurrence interval (MRI). For example, a wind speed with a ten-year MRI is the speed such that the probability of a gust of that magnitude or greater in a given year is 0.1. The goal of the data analysis described in this presentation is twofold. The first goal is to create isotach maps of the contiguous United States. The second goal is to provide numerical versions of the maps, together with software that performs automatic interpolation for user-specified coordinates and MRIs. The maps and software are packaged as an R package for easy use and distribution. The available data are time histories of wind speeds at stations located throughout the United States, with each observation categorized as either a thunderstorm or a non-thunderstorm event. The time history lengths vary from less than five years to nearly forty years. The maps are created in two stages. An extreme value model is first fitted to the data from each station. The resulting models are used to estimate wind speeds for any MRI of interest at each station. The wind speeds at individual stations are then smoothed using local regression to create the maps. Quantification of uncertainty is possible using the fitted local regression model.
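
    The two-stage idea can be sketched roughly as follows: stage one fits an extreme value distribution per station and reads off the speed for a chosen MRI, and stage two smooths the station estimates over space. The GEV-on-annual-maxima model, the kernel-weighted local average and all station data below are simplifying assumptions; the authors' extreme value model, local regression and R package may differ in detail.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        # Simulated stations: (latitude, longitude) and 25 annual-maximum speeds each
        stations = rng.uniform([25.0, -125.0], [49.0, -67.0], size=(30, 2))
        annual_maxima = [genextreme.rvs(-0.1, loc=30, scale=5, size=25, random_state=rng)
                         for _ in stations]

        def station_speed(maxima, mri=50):
            """Stage 1: fit a GEV, return the speed exceeded once per `mri` years on average."""
            c, loc, scale = genextreme.fit(maxima)
            return genextreme.isf(1.0 / mri, c, loc=loc, scale=scale)

        speeds = np.array([station_speed(m) for m in annual_maxima])

        def map_speed(lat, lon, bandwidth=5.0):
            """Stage 2: kernel-weighted average of station estimates at a query point."""
            d2 = ((stations - np.array([lat, lon])) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))
            return (w * speeds).sum() / w.sum()

        print(f"50-year MRI speed near (40N, 100W): {map_speed(40.0, -100.0):.1f}")
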
  • A Feedback on Three Years of Big Data Analysis in France

    Authors: Michel Lutz (OCTO Technology), Thomas Vial (OCTO Technology)
    Primary area of focus / application: Mining
    Keywords: Big Data, Statistics & machine learning, Large scale & distributed IT infrastructure, Success stories, Project management
    Submitted at 26-Feb-2014 14:38 by Michel Lutz
    Accepted
    24-Sep-2014 10:55 A Feedback on Three Years of Big Data Analysis in France
    For three years, we have been working on innovative projects with major companies, trying to improve their large-scale data-crunching practices. Such "Big Data" projects involve both analytics and computer science skills, in order to master statistical and machine learning methods and to set up distributed IT infrastructures such as Hadoop. We offer feedback on our experience in the French Big Data market. Thanks to a multi-skilled team able to understand business problems, define appropriate statistical analyses and implement them in a Big Data context, we have achieved great successes in sectors such as industry, banking and insurance. A sample of these success stories, with recommendations for managing Big Data projects efficiently at the technical and project levels, will be presented.
  • Split Plots Pros and Cons

    Authors: Pat Whitcomb (Stat-Ease, Inc.)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Design and analysis of experiments
    Keywords: Design of Experiments, DoE, Split-plot design, Restricted randomization, Power
    Submitted at 26-Feb-2014 23:08 by Pat Whitcomb
    Accepted (view paper)
    23-Sep-2014 14:20 Split Plots Pros and Cons
    This talk uses a series of case studies that illustrate the pros and cons of running factorial split-plot designs. The pros include:
    - Practical: Randomizing hard-to-change factors in groups, rather than randomizing every run, is much less labor- and time-intensive.
    - Malleable: Factors that naturally have large experimental units can be easily combined with factors having smaller experimental units.
    - More powerful: Tests for the subplot effects from the easy-to-change factors have higher power due to partitioning the variance sources.
    - Adaptable: New treatments can be introduced to experiments that are already in progress.

    The cons include:
    - Less powerful: Tests for the hard-to-change factors are less powerful, having a larger variance to test against and fewer changes to help overcome the larger error.
    - Unfamiliar: Analysis requires specialized methods to cope with partitioned variance sources (a minimal sketch follows this list).
    - Different: Hard-to-change (whole-plot) and easy-to-change (subplot) factor effects are tested against different estimated noise. This can result in large whole-plot effects not being statistically significant, whereas small subplot effects are significant even though they may not be practically important.
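
    As a minimal sketch of the specialized analysis mentioned above, the example below fits a split-plot experiment with a mixed model in which the whole plot enters as a random effect, so that whole-plot and subplot effects are tested against their own error terms. The factor names, simulated data and use of statsmodels are illustrative assumptions, not the case studies from the talk.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        rows = []
        for wp in range(8):                          # eight whole plots
            temp = [-1, 1][wp % 2]                   # hard-to-change factor, set once per plot
            wp_err = rng.normal(0, 1.0)              # whole-plot error
            for time in (-1, 1):                     # easy-to-change factor, randomized within
                y = 10 + 2 * temp + 1 * time + wp_err + rng.normal(0, 0.5)
                rows.append({"whole_plot": wp, "temp": temp, "time": time, "y": y})
        df = pd.DataFrame(rows)

        # A random intercept per whole plot partitions the two variance sources
        fit = smf.mixedlm("y ~ temp + time", df, groups=df["whole_plot"]).fit()
        print(fit.summary())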