ENBIS-16 in Sheffield

11 – 15 September 2016; Sheffield
Abstract submission: 20 March – 4 July 2016

The following abstracts have been accepted for this event:

  • Well-Posedness Conditions in Stochastic Inversion Problems

    Authors: Nicolas Bousquet (EDF R&D), Mélanie Blazère (EDF R&D and IMT)
    Primary area of focus / application: Other: SFdS
    Secondary area of focus / application: Modelling
    Keywords: Well-posedness, stochastic inversion, calibration of input parameters, Fisher information, Sobol indices, Prior constraints
    Submitted at 27-Apr-2016 14:34 by Nicolas Bousquet
    13-Sep-2016 12:40 Well-Posedness Conditions in Stochastic Inversion Problems
    We consider a stochastic inversion problem defined by observations assumed to be realizations of a random variable Y* such that Y* = Y + u and Y = g(X), where X is a Gaussian random vector with unknown parameters, u is an (experimental and/or process) noise with known distribution fu, and g is a deterministic function (possibly a black-box computer model). This inversion problem can be solved in frequentist or Bayesian frameworks (possibly by linearizing g), using missing-data algorithms. Solving it requires that several well-posedness and identifiability conditions be met. One of them, arising from predictive sensitivity analysis, is addressed in this talk. Imagine that the problem is solved. Any sensitivity study should then show that the main source of uncertainty explaining the variations of Y* is X, and not u. In practice, this diagnostic is established a posteriori, as a check on an estimated solution. However, it is more than desirable that it be imposed as a calibration constraint. This talk proposes a new general rule for the case when g is linearizable, based on Fisher information, and makes strong links with Sobol' indices. The approach is tested on toy examples and a hydraulic example, and compared with usual stochastic inversion methodologies that do not consider this well-posedness condition a priori.
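As a toy illustration of the constraint described in this abstract (the coefficients, covariance, and the linear form of g below are invented for the example, not taken from the talk), one can check for a linearized model what share of the variance of Y* is explained by X rather than by the noise u:

```python
import numpy as np

# Hypothetical linearized model Y* = a @ X + u. Under linearity,
# Var(Y*) = a' Sigma a + Var(u), which yields Sobol'-like variance shares
# for X and u. A well-posed calibration should find that X dominates.

rng = np.random.default_rng(0)

a = np.array([2.0, -1.0, 0.5])       # assumed linearized coefficients
sigma_X = np.diag([1.0, 0.5, 0.25])  # assumed covariance of the Gaussian X
var_u = 0.2                          # known noise variance

# Analytic variance share of X under the linear model
var_from_X = a @ sigma_X @ a
share_X = var_from_X / (var_from_X + var_u)

# Monte Carlo check of the same decomposition
X = rng.multivariate_normal(np.zeros(3), sigma_X, size=100_000)
u = rng.normal(0.0, np.sqrt(var_u), size=100_000)
y_star = X @ a + u
mc_share = np.var(X @ a) / np.var(y_star)

print(f"share of Var(Y*) explained by X: {share_X:.3f} (MC: {mc_share:.3f})")
```

Here the diagnostic is computed a posteriori from a known model; the point of the talk is to impose such a dominance condition a priori, as a calibration constraint.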
  • Load Control Demand Reduction Estimation in the Context of a Smart Grid Demonstrator

    Authors: Paul Haering (EIFER)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Design and analysis of experiments
    Keywords: Load control, Demand reduction estimation, Smart grid demonstrator, Selection algorithm, Baseline load, Demand response
    Submitted at 28-Apr-2016 08:55 by Paul Haering
    14-Sep-2016 10:30 Load Control Demand Reduction Estimation in the Context of a Smart Grid Demonstrator
    In the context of energy planning and smart grids, one of the major issues in demand-side management is estimating the baseline, i.e. the consumption that would have been measured without load control, in order to assess the load power reduction reflecting how participants respond to load shedding requests. This is a major challenge to overcome in order to better integrate demand response into the electricity market and thus allow high penetration of intermittent renewable energy.
    A few years ago, the French energy regulatory commission set up rules, the latest version of which is NEBEF 2.1, which define the baseline estimation methods that are valid for calculating load power reduction. The rules also support the testing of new methods that permit an accurate estimation of load management actions.
    An innovative method based on an algorithm inspired by L. Hatton et al. has been tested within a French Smart Grid demonstration project. It relies on selecting historical load curves such that the distance between their average load and the load profile measured in the hours before and after the load shedding request is minimal. Although the method has been applied only to limited data sets, it has shown promising results so far.
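A minimal sketch of such a selection-based baseline (the function name, window sizes, distance measure, and data below are assumptions for illustration, not the NEBEF 2.1 or demonstrator method):

```python
import numpy as np

def baseline_and_reduction(history, measured, event, adjacent, k=3):
    """history: (n_days, 24) past daily load curves; measured: (24,) event-day
    load; event/adjacent: hour indices during / around the shedding request."""
    # Distance computed on the adjacent (unaffected) hours only
    dists = np.linalg.norm(history[:, adjacent] - measured[adjacent], axis=1)
    # Keep the k closest historical curves and average them as the baseline
    chosen = np.argsort(dists)[:k]
    baseline = history[chosen].mean(axis=0)
    # Demand reduction = gap between baseline and measurement during the event
    reduction = (baseline[event] - measured[event]).mean()
    return baseline, reduction

# Example: flat 10 kW history, 2 kW shed between 18:00 and 20:00
history = np.full((10, 24), 10.0)
measured = np.full(24, 10.0)
measured[18:20] = 8.0
_, red = baseline_and_reduction(history, measured,
                                event=np.arange(18, 20),
                                adjacent=np.r_[14:18, 20:23])
print(red)  # 2.0
```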
  • Building Bridges between High-School and University Statistics Teaching: the Positive Experience of PLS (Piano Lauree Scientifiche)

    Authors: Stefania Mignani (Department of Statistical Sciences University of Bologna), Maria Gabriella Ottaviani (University of Rome, La Sapienza)
    Primary area of focus / application: Education & Thinking
    Secondary area of focus / application: Education & Thinking
    Keywords: Teaching statistics, Statistical literacy, Teacher training, Education gaps
    Submitted at 28-Apr-2016 12:13 by Stefania Mignani
    The role of statistics in school education in general, and in the Italian school curriculum in particular, has been strengthened over several years. As an example, in 2010 the Italian Ministry of Education introduced statistics and probability content into the mathematics curriculum under the domain named “Uncertainty and data”, a fundamental topic at all school levels. Consequently, students are expected to be adequately trained in statistics in preparation for academic programs and professional careers. We present in this paper the experience gained in Italy, with insights generalizable to school education in other contexts.
    In the Italian school system, few mathematics teachers have been trained in or are familiar with statistical concepts and methods, nor are they competent in selecting relevant, useful, and meaningful real examples. As a consequence, many Italian school students consider statistics a boring discipline. This negatively affects their attitudes towards statistics at university, where an introductory statistics course is typically taught with a very formal, mathematically structured approach. To address this problem, several Italian statisticians are actively taking part in the “Piano Lauree Scientifiche (PLS)”, a national scientific high-school degree project. This initiative started in 2005 in order to increase enrolments in scientific university degrees by promoting joint initiatives between high schools and universities. Specifically, at Bologna University, a team of teachers supports both teacher-training activities and laboratories for high-school students, emphasizing a data-oriented approach to enhance statistical thinking and reasoning. We present here our experience in this initiative and examples of good practice for increasing the synergy between university and high school with respect to the teaching of statistics.
  • Issues with Cleaning, Managing and Analysing Integrated Data - An Example from the Utilities Sector

    Authors: Warren Yabsley (Newcastle University and Enzen Global), Shirley Coleman (Newcastle University)
    Primary area of focus / application: Other: Big Data for SMEs
    Secondary area of focus / application: Business
    Keywords: Business readiness, Knowledge transfer, Data visualisation, Data readiness, RapidMiner
    Submitted at 28-Apr-2016 13:19 by Warren Yabsley
    Accepted
    13-Sep-2016 09:00 Issues with Cleaning, Managing and Analysing Integrated Data - An Example from the Utilities Sector
    Analysis of Big Data can be daunting for SMEs. Where do you start? How is it performed? What infrastructure is required? Components of a project that are essential to statisticians may seem inconsequential to business managers, and vice versa. What comes as second nature to one team member might not to another, and their assumptions about key aspects can differ vastly.

    By addressing the fundamental requirements for successful data analysis, building blocks are established that allow progression to larger data sets. With reference to the utilities sector, the role of the data analyst as an intermediary between data and analysis will be addressed.

    Topics will include the opportunities and benefits of data analysis, data readiness levels, achieving transparency and continuity within small teams, and the importance of data visualisation and of explaining results to non-specialists. Software that lends itself to SME use, such as RapidMiner, will be discussed, from the free version for initial exploratory work through to licensed Hadoop functionality for Big Data as the SME advances and evolves.
  • Instilling Quality Knowledge and Methods in Industry

    Authors: Avner Halevy (University of Haifa)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Process
    Keywords: Quality science, Quality methods, Quality implementation, Integrated algorithms
    Submitted at 28-Apr-2016 16:33 by Avner Halevy
    13-Sep-2016 16:20 Instilling Quality Knowledge and Methods in Industry
    The gap between the level of academically developed statistical methods and tools and the level of their adoption in industry is frustratingly large. Academics and practitioners alike have published laments such as “the quality management movement has lost its way”; “is our current understanding of quality sufficient?”; “statistical process control: should we keep on fighting a lost battle?”; and more.
    Externally imposed quality systems, such as certifications, accreditations or packages of tools and methods, do have a noticeable impact at first, but later decline and fizzle out. These interventional systems have not matured into a sustainable, universal industrial norm of continually increasing quality performance.
    As a first general step towards changing this situation, a roadmap, a set of algorithms towards well defined goals, will be suggested. Important landmarks along this road will be discussed:
    a. A clear, simple and direct definition of quality, to be adopted by all involved.
    b. A clear notion of the multi-disciplinary field named “Quality Science”.
    c. A clear definition and sense of our mission in local and global economies.
    d. Suggested integrated algorithms for the successful implementation of quality methods and tools in industry.
    e. A step-by-step description of the implementation algorithm, using principles and real-life examples from successful implementations in manufacturing and in service industries. Examples of failures, and their causes, will be discussed too. A list of fundamental and critical multidisciplinary actions will be given.
  • A Synergistic Blend of Multivariate Analysis Methods with Design of Experiments Tools

    Authors: Pat Whitcomb (Stat-Ease, Inc.)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Design and analysis of experiments
    Keywords: MultiVariate Analysis (MVA), Principal Component Analysis (PCA), Design of Experiments (DOE), Optimal design
    Submitted at 28-Apr-2016 19:36 by Pat Whitcomb
    12-Sep-2016 14:00 A Synergistic Blend of Multivariate Analysis Methods with Design of Experiments Tools
    This talk illustrates how multivariate analysis (MVA) methods, in combination with design of experiments (DOE) tools, can be deployed to pinpoint optimal material properties. It demonstrates how to build an optimal design from principal components that span the underlying latent structures, and how to use the DOE results to find an optimal compound, via the following steps:
    -Select a set of material descriptors for a number of chemical compounds.
    -Run PCA to model material properties and assess the underlying dimensionality.
    -Select particular compounds for an optimal DOE.
    -Perform the experiments and enter the responses into the DOE.
    -Find the compounds closest to the optimum and verify.
    Attendees will come away with a better understanding of how to combine MVA and DOE.
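A rough sketch of the first three steps above (the descriptor data are synthetic, and a simple maximin / farthest-point selection stands in for a proper optimal-design criterion such as D-optimality):

```python
import numpy as np

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(30, 8))   # 30 compounds x 8 material descriptors

# Step 2: PCA via SVD on centered data to assess the latent dimensionality
Xc = descriptors - descriptors.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
n_pc = int(np.searchsorted(np.cumsum(explained), 0.8) + 1)  # ~80% variance
scores = Xc @ Vt[:n_pc].T                # compounds projected into PC space

# Step 3: pick compounds spread across the latent space as design runs
def select_runs(points, n_runs):
    # Start from the point farthest from the centroid, then repeatedly add
    # the candidate farthest from all points chosen so far (maximin).
    chosen = [int(np.argmax(np.linalg.norm(points - points.mean(0), axis=1)))]
    while len(chosen) < n_runs:
        d = np.min(np.linalg.norm(points[:, None] - points[chosen], axis=2),
                   axis=1)
        chosen.append(int(np.argmax(d)))
    return chosen

runs = select_runs(scores, n_runs=6)
print(f"{n_pc} PCs retained; compounds selected for the design: {runs}")
```

The experiments would then be run on the selected compounds and the responses modelled in PC space to locate, and verify, the optimum.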